
    Implementation of a Large-Scale Platform for Cyber-Physical System Real-Time Monitoring

    The emergence of Industry 4.0 and the Internet of Things (IoT) has meant that the manufacturing industry has evolved from embedded systems to cyber-physical systems (CPSs). This transformation has given manufacturers the ability to measure the performance of industrial equipment by means of data gathered from on-board sensors, allowing the status of industrial systems to be monitored and anomalies to be detected. However, the growing volume of measured data has prompted many companies to investigate innovative ways to manage it. In recent years, cloud computing and big data technologies have emerged among the scientific communities as key enabling technologies for addressing the current needs of CPSs. This paper presents a large-scale platform for CPS real-time monitoring based on big data technologies, which aims to perform real-time analysis targeting the monitoring of industrial machines in a real work environment. The proposed solution is validated by implementing it on a real industrial use case comprising several industrial press machines. Formal experiments in a real scenario are conducted to demonstrate the effectiveness of the solution as well as its adequacy and scalability for future demand requirements. As a result of the implementation of this solution, the overall equipment effectiveness has been improved. The authors are grateful to Goizper and Fagor Arrasate for providing the industrial case study, and specifically to Jon Rodriguez and David Chico (Fagor Arrasate) for their help and support. Any opinions, findings, and conclusions expressed in this article are those of the authors and do not necessarily reflect the views of the funding agencies.
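    The abstract does not reproduce the platform's code, but the overall equipment effectiveness (OEE) metric it reports improving has a standard definition: availability × performance × quality. The following is a minimal Python sketch of how such a metric might be computed from per-shift machine counters in a monitoring pipeline; the counter names and example values are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ShiftCounters:
    """Raw counters a press machine's on-board sensors might report per shift."""
    planned_time_min: float   # scheduled production time
    run_time_min: float       # time the machine actually ran
    ideal_cycle_s: float      # ideal seconds per part
    total_parts: int          # parts produced
    good_parts: int           # parts passing quality checks

def oee(c: ShiftCounters) -> float:
    """Standard OEE = availability * performance * quality."""
    availability = c.run_time_min / c.planned_time_min
    performance = (c.ideal_cycle_s * c.total_parts) / (c.run_time_min * 60)
    quality = c.good_parts / c.total_parts
    return availability * performance * quality

# Example: one shift of a hypothetical press machine.
shift = ShiftCounters(planned_time_min=480, run_time_min=420,
                      ideal_cycle_s=3.0, total_parts=7600, good_parts=7450)
print(f"OEE = {oee(shift):.1%}")  # availability 87.5% * performance ~90.5% * quality ~98%
```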

    Big Data Now, 2015 Edition

    Now in its fifth year, O’Reilly’s annual Big Data Now report recaps the trends, tools, applications, and forecasts we’ve talked about over the past year. For 2015, we’ve included a collection of blog posts, authored by leading thinkers and experts in the field, that reflect a unique set of themes we’ve identified as gaining significant attention and traction. Our list of 2015 topics includes: data-driven cultures; data science; data pipelines; big data architecture and infrastructure; the Internet of Things and real time; applications of big data; and security, ethics, and governance. Is your organization on the right track? Get a hold of this free report now and stay in tune with the latest significant developments in big data.

    Adaptive Failure-Aware Scheduling for Hadoop

    Given the dynamic nature of cloud environments, failures are the norm rather than the exception in the data centers powering cloud frameworks. Despite the diversity of recovery mechanisms integrated into cloud frameworks, their schedulers still make poor scheduling decisions that lead to task failures due to unforeseen events such as unpredicted service demands or hardware outages. Traditionally, simulation and analytical modeling have been widely used to analyze the impact of scheduling decisions on failure rates. However, they cannot provide accurate results and exhaustive coverage of cloud systems, especially when failures occur. In this thesis, we present new approaches for modeling and verifying an adaptive failure-aware scheduling algorithm for Hadoop that detects these failures early and reschedules tasks according to changes in the cloud. Hadoop is the framework of choice on many off-the-shelf clusters in the cloud for processing data-intensive applications by efficiently running them across multiple distributed machines. The proposed scheduling algorithm for Hadoop relies on predictions made by machine learning algorithms trained on previously executed tasks and on data collected from the Hadoop environment. To further improve Hadoop's scheduling decisions on the fly, we use reinforcement learning techniques to select an appropriate scheduling action for each scheduled task. Furthermore, we propose an adaptive algorithm to dynamically detect node failures in Hadoop. We implement the above approaches in ATLAS, an AdapTive Failure-Aware Scheduling algorithm that can be built on top of existing Hadoop schedulers. To illustrate the usefulness and benefits of ATLAS, we conduct a large empirical study on a Hadoop cluster deployed on Amazon Elastic MapReduce (EMR), comparing the performance of ATLAS to that of three Hadoop scheduling algorithms (FIFO, Fair, and Capacity). Results show that ATLAS outperforms these scheduling algorithms in terms of failure rates, execution times, and resource utilization. Finally, we propose a new methodology to formally identify the impact of Hadoop's scheduling decisions on failure rates. We use model checking to verify some of the most important scheduling properties in Hadoop (schedulability, resource-deadlock freeness, and fairness) and provide possible strategies to avoid their violation in ATLAS. The formal verification of the Hadoop scheduler makes it possible to identify more task failures and hence reduce the number of failures in ATLAS.
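    The core idea described above, predicting whether a scheduled task will fail from features of past executions and placing (or delaying) it accordingly, can be sketched in a few lines. Below is a minimal illustration using scikit-learn on synthetic data; the feature set, model choice, labeling rule, and placement policy are assumptions made for the example and are not ATLAS's actual design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic execution history: one row per (task, node) run.
# Columns: node CPU load, node free memory (GB), node failures in last hour, task input size (GB).
X = rng.random((2000, 4)) * np.array([1.0, 16.0, 5.0, 50.0])
# Toy label: runs on overloaded, failure-prone nodes tended to fail.
y = ((X[:, 0] > 0.8) & (X[:, 2] > 2.0)).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def pick_node(input_gb, nodes, risk_threshold=0.5):
    """Return the candidate node with the lowest predicted failure risk,
    or None if every placement looks too risky (caller would delay/reschedule)."""
    rows = np.array([[n["cpu"], n["free_gb"], n["fails_1h"], input_gb] for n in nodes])
    risks = model.predict_proba(rows)[:, 1]  # P(failure) for each candidate node
    best = int(np.argmin(risks))
    return (nodes[best], risks[best]) if risks[best] < risk_threshold else (None, risks[best])

nodes = [{"cpu": 0.95, "free_gb": 2.0, "fails_1h": 4},
         {"cpu": 0.30, "free_gb": 10.0, "fails_1h": 0}]
print(pick_node(input_gb=12.0, nodes=nodes))  # should favor the lightly loaded node
```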

    Ontology-based Consistent Specification and Scalable Execution of Sensor Data Acquisition Plans in Cross-Domain IoT Platforms

    Nowadays there is an increasing number of vertical Internet of Things (IoT) applications developed within IoT platforms that often do not interact with each other because they adopt different standards and formats. Several efforts are devoted to the construction of software infrastructures that facilitate interoperability among heterogeneous cross-domain IoT platforms for the realization of horizontal applications. Even if their realization poses different challenges across all layers of the network stack, in this thesis we focus on the interoperability issues that arise at the data management layer. Starting from a flexible multi-granular Spatio-Temporal-Thematic data model in which events generated by different kinds of sensors can be represented, we propose a Semantic Virtualization approach in which the sensors belonging to different IoT platforms, and the schemas of the event streams they produce, are described in a Domain Ontology, obtained by extending well-known ontologies (SSN and IoT-Lite) to the needs of a specific domain. These sensors can then be exploited for the creation of Data Acquisition Plans (DAPs) by means of which the streams of events can be filtered, merged, and aggregated in a meaningful way. Notions of soundness and consistency are introduced to bind the output streams of the services contained in a DAP to the Domain Ontology, providing a semantic description of its final output. The facilities of the StreamLoader prototype are finally presented, supporting domain experts in the Semantic Virtualization of the sensors and in the construction of meaningful DAPs. Different graphical facilities have been developed to support domain experts in the development of complex DAPs. The system also provides facilities for their syntax-based translation into the Apache Spark Streaming language and for their real-time execution on a distributed cluster of machines.
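    As a rough illustration of what a translated DAP stage might look like, here is a minimal filter-and-aggregate pipeline in Spark's Structured Streaming API, using the built-in rate source to stand in for a virtualized sensor stream; the column names, thresholds, and window size are invented for the example and are not taken from the StreamLoader system:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dap-sketch").getOrCreate()

# The built-in "rate" source stands in for a virtualized sensor stream.
events = (spark.readStream.format("rate").option("rowsPerSecond", 10).load()
          .withColumn("sensorId", (F.col("value") % 3).cast("string"))
          .withColumn("reading", F.rand() * 100))

# Filter -> windowed aggregate, mirroring a DAP's filter/merge/aggregate stages.
result = (events
          .filter(F.col("reading") > 50)
          .groupBy(F.window("timestamp", "30 seconds"), "sensorId")
          .agg(F.avg("reading").alias("avgReading")))

query = result.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```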

    Big Data in Bioeconomy

    This edited open access book presents the comprehensive outcome of the European DataBio project, which examined new data-driven methods to shape a bioeconomy. These methods are used to develop new and sustainable ways to use forest, farm, and fishery resources. As a European initiative, the goal is to use these new findings to support decision-makers and producers, meaning farmers, land and forest owners, and fishermen. Drawing on 27 pilot projects from 17 countries, the authors examine important sectors and highlight examples where modern data-driven methods were used to increase sustainability. How can farmers, foresters, or fishermen use these insights in their daily lives? The authors answer this and other questions for our readers. The first four parts of this book give an overview of the big data technologies relevant for optimal raw material gathering. The next three parts put these technologies into perspective by showing usable applications from farming, forestry, and fishery. The final part of this book gives a summary and a view of the future. With its broad outlook and variety of topics, this book is an enrichment for students and scientists in bioeconomy, biodiversity, and renewable resources.

    The Nexus Between Security Sector Governance/Reform and Sustainable Development Goal-16

    This Security Sector Reform (SSR) Paper offers a universal and analytical perspective on the linkages between Security Sector Governance (SSG)/SSR (SSG/R) and Sustainable Development Goal 16 (SDG-16), focusing on conflict and post-conflict settings as well as transitional and consolidated democracies. Against the background of development and security literatures that have traditionally maintained a separate and compartmentalized presence in both academic and policymaking circles, it maintains that contemporary security- and development-related challenges are inextricably linked, requiring effective measures grounded in an accurate understanding of the nature of these challenges. In that sense, SDG-16 is surely a good step in the right direction. After comparing and contrasting SSG/R and SDG-16, this SSR Paper argues that human security lies at the heart of the nexus between the United Nations (UN) 2030 Agenda and SSG/R. To do so, it first provides a brief overview of the scholarly and policymaking literature on the development-security nexus to set the background for the adoption of the 2030 Agenda. Next, it reviews the literature on SSG/R and the SDGs, and how each concept has evolved over time. It then identifies the puzzle this study seeks to address by comparing and contrasting SSG/R with SDG-16. After making the case that human security lies at the heart of the nexus between the UN's 2030 Agenda and SSG/R, the paper analyses the strengths and weaknesses of human security as a bridge between SSG/R and SDG-16 and makes policy recommendations on how SSG/R, bolstered by human security, may help achieve better results on the SDG-16 targets. It specifically emphasizes the importance of transparency, oversight, and accountability on the one hand, and a participative approach and local ownership on the other. It concludes by arguing that a simultaneous emphasis on security and development is sorely needed to address the issues under the purview of SDG-16.

    Implementation of a Software System to Unify Medical Records in Health Centers of the City of Arequipa

    Currently, the health sector uses physical medical records, which causes serious deficiencies in the care that specialists and the staff in charge provide to patients: the physical medical record (HCF) may not be available when requested, previous diagnoses may be illegible, and test results may be out of date or simply lost, leading to negligence and to the erasure of its evidence. In addition, the health centers of the city of Arequipa currently waste storage space on HCFs that are no longer fit for use, which contributes to their deterioration and/or loss. The purpose of this project is the development and implementation of a software system that unifies the medical records of the health centers located in the city of Arequipa, designed to solve this problem and to meet patients' expectations of timely care from the health services. The system, hosted in the cloud, uses a scalable search engine, also described as a document-based data store (ElasticSearch), and a relational MySQL schema (Historial_Clínico). The search engine stores the digital medical records, while the relational schema captures the business logic. The software developed in this thesis, called HIS (Health Integrated System), is a responsive web system that unifies medical records so that physicians have immediate access to the information they contain. In conclusion, the system presents the key information from each patient's electronic medical record while respecting the principles and rules of Law 30024, which requires the confidentiality and integrity of electronic medical records; tests carried out with patients and health specialists yielded good results; and the resulting system is scalable, distributed, and able to display the unified electronic medical records. Keywords: HCF, HIS system, ElasticSearch
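    The thesis names its building blocks (ElasticSearch for the record documents, MySQL for the business logic) but the abstract gives no code, so the following is only a plausible sketch of the indexing-and-search side, using the official elasticsearch Python client (8.x style); the index name, document fields, and host are assumptions for illustration:

```python
from elasticsearch import Elasticsearch

# Connect to the cluster (host and credentials are placeholders).
es = Elasticsearch("http://localhost:9200")

# A unified clinical record as a document; the fields are illustrative only.
record = {
    "patient_id": "P-000123",
    "health_center": "Centro de Salud Arequipa Norte",
    "date": "2019-05-14",
    "diagnosis": "Type 2 diabetes mellitus",
    "lab_results": ["HbA1c 7.9%"],
}

# Index the record; documents from every health center share one index,
# which is what makes a unified, patient-wide view possible.
es.index(index="historias_clinicas", id="P-000123-2019-05-14", document=record)

# Retrieve a patient's full history across all centers in one query.
hits = es.search(index="historias_clinicas",
                 query={"term": {"patient_id": "P-000123"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["date"], hit["_source"]["diagnosis"])
```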