
    Dataflow-Oriented Provenance System for Multifusion Wireless Sensor Networks

    We present a dataflow-oriented provenance system for data-fusion sensor networks. The model is best suited to networks sensing dynamic objects, and although our system is generic, we model it on a binary proximity sensor network. We introduce a network-level fault-tolerance mechanism that exploits the cognitive strength of provenance models. Our provenance model reduces the impact of individual sensors' limited capabilities and mitigates the error-prone nature of wireless sensor networks. In addition, provenance data is used to efficiently build the dynamic data-fusion scenario and to adjust the network, for example by turning off some sensors. In a fault-tolerant, self-adjusting sensor network, sensor data produces more accurate results, and with these improvements tasks such as target localization are performed more precisely. Another aspect of our network is that, with computation nodes spread throughout the network, computation is done in a distributed manner; as nodes make decisions based on the provenance and fusion data available to them, the network exhibits distributed intelligence.
    Keywords: Multifusion, Wireless Sensor Networks, Open Provenance Model
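
    A minimal sketch of the idea described above: a fusion node keeps simple provenance records for readings from binary proximity sensors and uses them to flag sensors whose readings persistently disagree with the fused decision. This is illustrative only, not the authors' implementation; all names and thresholds are hypothetical.

        # Illustrative sketch: provenance-assisted fault tolerance in a fused
        # binary proximity sensor network (hypothetical names and thresholds).
        from collections import defaultdict
        from dataclasses import dataclass, field

        @dataclass
        class ProvenanceRecord:
            sensor_id: str
            reading: int                 # 1 = object detected, 0 = not detected
            timestamp: float
            derived_from: list = field(default_factory=list)  # upstream record ids

        class FusionNode:
            def __init__(self, disagreement_limit=5):
                self.records = []
                self.disagreements = defaultdict(int)
                self.disagreement_limit = disagreement_limit

            def fuse(self, records):
                """Majority-vote fusion; provenance tells us which sensors disagreed."""
                self.records.extend(records)
                votes = sum(r.reading for r in records)
                decision = 1 if votes > len(records) / 2 else 0
                for r in records:
                    if r.reading != decision:
                        self.disagreements[r.sensor_id] += 1
                return decision

            def sensors_to_turn_off(self):
                """Sensors whose provenance shows persistent disagreement become
                candidates for being switched off or recalibrated by the network."""
                return [s for s, n in self.disagreements.items()
                        if n >= self.disagreement_limit]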

    Helmholtz Portfolio Theme Large-Scale Data Management and Analysis (LSDMA)

    The Helmholtz Association funded the "Large-Scale Data Management and Analysis" portfolio theme from 2012 to 2016. Four Helmholtz centres, six universities and one further research institution in Germany joined forces to enable data-intensive science by optimising data life cycles in selected scientific communities. In our Data Life Cycle Labs, data experts performed joint R&D together with the scientific communities, while the Data Services Integration Team focused on generic solutions applied by several communities.

    Towards Loosely-Coupled Programming on Petascale Systems

    We have extended the Falkon lightweight task execution framework to make loosely coupled programming on petascale systems a practical and useful programming model. This work studies and measures the performance factors involved in applying this approach to enable the use of petascale systems by a broader user community, and with greater ease. Our work enables the execution of highly parallel computations composed of loosely coupled serial jobs with no modifications to the respective applications. This approach allows a new, and potentially far larger, class of applications to leverage petascale systems, such as the IBM Blue Gene/P supercomputer. We present the challenges of I/O performance encountered in making this model practical, and show results using both microbenchmarks and real applications from two domains: economic energy modeling and molecular dynamics. Our benchmarks show that we can scale up to 160K processor cores with high efficiency, and can achieve sustained execution rates of thousands of tasks per second.
    Comment: IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis (SuperComputing/SC) 200
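
    The following sketch illustrates the loosely coupled many-task model the abstract describes: a bag of independent, unmodified serial jobs dispatched to a pool of workers. It uses only the Python standard library as a stand-in; it is not the Falkon API, and the job list is a placeholder.

        # Generic bag-of-tasks execution of unmodified serial applications
        # (illustrative stand-in for the loosely coupled programming model).
        import subprocess
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def run_job(cmd):
            """Run one serial application as a black box and report its exit code."""
            result = subprocess.run(cmd, capture_output=True, text=True)
            return cmd, result.returncode

        def run_bag_of_tasks(commands, workers=8):
            with ProcessPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(run_job, c) for c in commands]
                for f in as_completed(futures):
                    cmd, rc = f.result()
                    print(f"{' '.join(cmd)} -> exit {rc}")

        if __name__ == "__main__":
            # e.g. thousands of parameter-sweep runs, each an independent serial job
            jobs = [["echo", f"task-{i}"] for i in range(100)]
            run_bag_of_tasks(jobs)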

    Data integration and FAIR data management in Solid Earth Science

    Integrated use of multidisciplinary data is nowadays a recognized trend in scientific research, in particular in the domain of solid Earth science, where the understanding of a physical process is improved and made more complete by different types of measurements (for instance, ground acceleration, SAR imaging, crustal deformation) describing the same physical phenomenon. FAIR principles are recognized as a means to foster data integration by providing a common set of criteria for building data stewardship systems for Open Science. However, implementing the FAIR principles raises issues beyond the technical dimension, notably around governance and legal aspects. On the technical side in particular, the development of FAIR data provision systems is often delegated to research infrastructures or data providers, with support in terms of metrics and best practices offered by cluster projects or dedicated initiatives. In the current work, we describe the approach to FAIR data management in the European Plate Observing System (EPOS), a distributed research infrastructure in the solid Earth science domain that includes more than 250 individual research infrastructures across 25 countries in Europe. We focus in particular on the technical aspects, while also covering governance, policies and organizational elements, by describing the architecture of the EPOS delivery framework from both the organizational and the technical point of view and by outlining the key principles used in the technical design. We describe how a combination of approaches, namely rich metadata and service-based system design, is required to achieve data integration. We show the system architecture and the basic features of the EPOS data portal, which integrates data from more than 220 services in a FAIR way. The construction of the portal was driven by the EPOS FAIR data management approach, which, by defining a clear roadmap for compliance with the FAIR principles, produced a number of best practices and technical solutions. This work spans more than a decade, with the key efforts concentrated in the last five years within the EPOS Implementation Phase project and the establishment of EPOS-ERIC, and it was carried out in synergy with other EU initiatives dealing with FAIR data. On the basis of the EPOS experience, future directions are outlined, emphasizing the need to provide i) FAIR reference architectures that make it easier for data practitioners and engineers from the domain communities to adopt the FAIR principles and build FAIR data systems; ii) a FAIR data management framework addressing FAIR through the entire data lifecycle, including reproducibility and provenance; and iii) the extension of the FAIR principles to the policy and governance dimensions.
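
    As a small illustration of the "rich metadata plus service-based design" approach mentioned above, the sketch below shows a DCAT-style dataset record of the kind an integration portal can harvest, together with a toy completeness check. Field names, identifiers and URLs are hypothetical and do not reflect the actual EPOS metadata schema.

        # Illustrative rich-metadata record for FAIR data integration
        # (hypothetical fields; not the EPOS schema).
        dataset_record = {
            "identifier": "doi:10.xxxx/example-gnss-timeseries",   # persistent ID (Findable)
            "title": "GNSS daily displacement time series",
            "description": "Crustal deformation time series for stations in Europe",
            "keywords": ["GNSS", "crustal deformation", "solid Earth"],
            "license": "CC-BY-4.0",                                 # Reusable
            "temporal_extent": {"start": "2000-01-01", "end": "2023-12-31"},
            "spatial_extent": {"bbox": [-10.0, 35.0, 30.0, 60.0]},
            "distribution": {                                       # Accessible / Interoperable
                "access_url": "https://example.org/api/gnss/timeseries",
                "media_type": "application/json",
                "conforms_to": "https://example.org/specs/timeseries-v1",
            },
            "provenance": "Processed with strategy X at analysis centre Y",
        }

        def is_minimally_fair(record):
            """Toy check: a portal can only integrate records that carry the basics."""
            required = ["identifier", "license", "distribution"]
            return all(record.get(k) for k in required)

        print(is_minimally_fair(dataset_record))  # True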

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the challenges listed above, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 programme (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques and systems for trusted, ethical, and secure AI in manufacturing. The chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, which makes it broadly and freely available to the AI and smart manufacturing communities.
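
    To make the adversarial-robustness theme above concrete, the sketch below shows the standard fast gradient sign method (FGSM), a common way to probe how sensitive a neural network is to small input perturbations. It is a generic PyTorch illustration, not a technique or result from the STAR project, and the model, data and epsilon value are assumed.

        # Generic FGSM robustness probe for an image classifier (illustrative only).
        import torch
        import torch.nn.functional as F

        def fgsm_perturb(model, x, y, epsilon=0.03):
            """Return an adversarially perturbed copy of the input batch x."""
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            # Step in the direction that increases the loss, clipped to valid pixel range.
            return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

        def robust_accuracy(model, x, y, epsilon=0.03):
            """Fraction of samples still classified correctly after the attack."""
            x_adv = fgsm_perturb(model, x, y, epsilon)
            preds = model(x_adv).argmax(dim=1)
            return (preds == y).float().mean().item()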

    A Model For Improving Ethics In Construction Materials And Products Supply Chain Using Blockchain

    There are countless materials and products that make up a building, including cladding, glazing, roofing, floors, ceilings and systems, and the hidden and fragmented structure of the supply chain makes it highly vulnerable to several forms of ethical breaches at different tiers. Consumers are also increasingly concerned about where the products they buy come from, highlighting important areas of concern that include ethical, environmental, and social issues. Current research identifies digitalization as a key part of providing transparency and increasing fairness in supply chains, and blockchain technology is lauded as having the potential to deliver this. However, while there has been a growing emphasis on ethics in construction in recent years, and an increase in studies around blockchain, there remains a paucity of studies on how blockchain may help to improve the environmental and social dimensions of ethics in construction supply chains, a gap that this study fills through a holistic triple bottom line (TBL) approach. To achieve this, the study aims to develop and validate a model for improving ethics in construction materials and products supply chains (CMPSC) following the TBL construct using blockchain technology. The study also explores the current state of ethics in the CMPSC and existing implementations of blockchain for ethics, and applies the learnings to develop a conceptual model to improve environmental, social and business ethics in the CMPSC using blockchain. The model was then refined and validated via a dual-phase validation protocol consisting of expert interviews and focus group discussions. A total of 30 participants took part in this study, comprising 16 construction industry supply chain professionals, 10 professionals in ethics/sustainability in construction and 4 blockchain technology experts. NVivo 12 was used to thematically analyse both the interview and the focus group data, investigating the data from a data-driven perspective (inductive coding) and from the research question perspective (checking whether the data are consistent with the research questions and provide sufficient information). The 30 interviews resulted in 4 high-level themes, 15 mid-level themes and 28 low-level themes, with a total of 721 codes within the themes. The analysis of the focus group data resulted in 3 high-level themes and 10 mid-level themes, with a total of 74 codes within all themes. Results from this study revealed that the effectiveness of current ethical measures in the CMPSC has been limited due to weak implementation and compliance, the inability of government to play its role, and the outright denial of unethical practices within supply chains. Results also show that greater emphasis is placed on the business component of ethics, while the environmental and social components receive comparable attention only when they can be monetised or are explicitly demanded; the current state of ethics in the CMPSC nonetheless remains weak across all three dimensions examined. Further results show that while blockchain may help improve ethics in the CMPSC, in addition to the transparency and digitization the technology provides, education and the upholding of personal ethical values by supply chain players are key to the success of both current and new ethical supply chain initiatives. Individuals must first be made ethically aware in order to act ethically; only then can the implementation of a technological tool prosper.
    The main contribution of this study to knowledge is the development of a model for improving ethics in the CMPSC within the TBL construct through blockchain technology. The model developed in this study provides practical clarity on how blockchain may be implemented within fragmented supply chains and a significant understanding of a socio-technical approach to addressing the issue of ethics within construction supply chains. It also plays a vital role in helping the intended users and actors improve their knowledge of the technology and of how blockchain can help to improve ethics in the CMPSC, and understand their roles and responsibilities on the network, thereby providing a framework and prerequisite guidance for Blockchain-as-a-Service (BaaS) providers in the development of the computer model (blockchain network). The findings of this thesis demonstrate new insights and contribute to the existing body of knowledge by further advancing the discussion on the role of blockchain in the construction industry.
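
    As a minimal sketch of the kind of tamper-evident record a blockchain network could keep for a construction material, the example below chains ethics attestations by hash so that provenance entries cannot be silently rewritten. It is illustrative only, not the model proposed in the thesis, and the materials, events and attestations shown are made up.

        # Toy hash-linked ledger of ethics attestations for a construction material.
        import hashlib, json, time

        def make_block(previous_hash, payload):
            block = {"timestamp": time.time(), "previous_hash": previous_hash,
                     "payload": payload}
            block["hash"] = hashlib.sha256(
                json.dumps(block, sort_keys=True).encode()).hexdigest()
            return block

        ledger = [make_block("0" * 64, {"event": "genesis"})]
        for event in [
            {"material": "cladding panel #A17", "event": "sourced",
             "attestation": "supplier self-declared conflict-free minerals"},
            {"material": "cladding panel #A17", "event": "audited",
             "attestation": "third-party social-compliance audit passed"},
        ]:
            ledger.append(make_block(ledger[-1]["hash"], event))

        def verify(ledger):
            """A buyer can recompute every hash to detect tampering anywhere upstream."""
            for prev, block in zip(ledger, ledger[1:]):
                body = {k: block[k] for k in ("timestamp", "previous_hash", "payload")}
                ok = (block["previous_hash"] == prev["hash"] and
                      block["hash"] == hashlib.sha256(
                          json.dumps(body, sort_keys=True).encode()).hexdigest())
                if not ok:
                    return False
            return True

        print(verify(ledger))  # True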

    A Review on Modern Distributed Computing Paradigms: Cloud Computing, Jungle Computing and Fog Computing

    Distributed computing attempts to improve performance in large-scale computing problems through resource sharing. Increasingly affordable computing power, coupled with advances in communications and networking and the advent of big data, now enables new distributed computing paradigms such as Cloud, Jungle and Fog computing. Cloud computing brings a number of advantages to consumers in terms of accessibility and elasticity; it is based on the centralization of resources with huge processing power and storage capacities. Fog computing, in contrast, pushes the frontier of computing away from centralized nodes to the edge of the network, to enable computing at the source of the data. Jungle computing, on the other hand, combines clusters, grids, clouds and other resources simultaneously in order to gain the maximum potential computing power. Reviewing these paradigms together helps to make sense of these new terms. This paper therefore describes the advent of these new forms of distributed computing, provides definitions of Cloud, Jungle and Fog computing, and identifies their key characteristics. In addition, their architectures are illustrated and several main use cases are introduced.
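
    A toy illustration of the distinction drawn above, not taken from the paper: latency-critical work stays at the fog (edge) layer near the data source, heavy batch work goes to centralised cloud resources, and a "jungle" deployment combines several backends at once. The thresholds are arbitrary placeholders.

        # Toy placement decision across fog, cloud and jungle resources.
        def place_task(latency_budget_ms, cpu_hours):
            if latency_budget_ms < 50:
                return "fog"      # near the data source, low latency
            if cpu_hours > 100:
                return "cloud"    # centralised, elastic capacity
            return "jungle"       # mix of clusters, grids and clouds

        print(place_task(latency_budget_ms=10, cpu_hours=0.1))    # fog
        print(place_task(latency_budget_ms=500, cpu_hours=5000))  # cloud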