
    E-infrastructures fostering multi-centre collaborative research into the intensive care management of patients with brain injury

    Clinical research is becoming ever more collaborative, with multi-centre trials now common practice. With this in mind, never has it been more important to have secure access to data and, in so doing, to tackle the challenges of inter-organisational data access and usage. This is especially the case for research conducted within the brain injury domain, due to the complicated multi-trauma nature of the disease and its associated complex collation of time-series data of varying resolution and quality. It is now widely accepted that advances in treatment for this group of patients will only be delivered if the technical infrastructures underpinning the collection and validation of multi-centre research data for clinical trials are improved. In recognition of this need, IT-based multi-centre e-Infrastructures such as the Brain Monitoring with Information Technology group (BrainIT - www.brainit.org) and the Cooperative Study on Brain Injury Depolarisations (COSBID - www.cosbid.de) have been formed. A serious impediment to the effective implementation of these networks is access to the know-how and experience needed to install, deploy and manage security-oriented middleware systems that provide secure access to distributed hospital-based datasets, and especially the linkage of these datasets across sites. The recently funded EU Framework 7 ICT project Advanced Arterial Hypotension Adverse Event prediction through a Novel Bayesian Neural Network (AVERT-IT) is focused on tackling these challenges. This chapter describes the problems inherent to data collection within the brain injury medical domain, the current IT-based solutions designed to address these problems, and how they perform in practice. We outline how the authors have collaborated towards developing Grid solutions to address the major technical issues, and we describe a prototype solution which ultimately formed the basis for the AVERT-IT project. We also present the design of the underlying Grid infrastructure for AVERT-IT and how it will be used to produce novel approaches to data collection, data validation and clinical trial design.

    Interoperable e-Infrastructure Services in Arabia

    e-Infrastructures have become critical platforms that integrate computational resources, facilities, and repositories globally. The coordination and harmonization of advanced e-Infrastructure projects developed with partners from Europe, Latin America, Arabia, Africa, China, and India has contributed to interoperable platforms based on identity federation and science gateway technologies. This paper presents these technologies as they support key services in the development of the Arabia networking and services platform for research and education. The platform provides scientists, teachers, and students with seamless access to a variety of advanced resources, services, and applications available at regional e-Infrastructures in Europe and elsewhere. Users simply enter the credentials provided by their home institutions to get authenticated and do not need digital certificate-based mechanisms. Twenty applications from five scientific domains were deployed and integrated. Results showed that on average about 35,000 jobs run monthly, for a total of about 17,500 CPU wall-clock hours. Seamlessly integrated e-Infrastructures for regional e-Science activities are therefore important resources that support scientists, students, and faculty with computational services and linkage to global research communities.
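    The federated-login idea above (users authenticate with home-institution credentials instead of digital certificates) can be sketched as follows. This is a minimal illustration, not the platform's actual API: the function names, the shared signing key, and the HMAC scheme are assumptions standing in for a real SAML/OIDC identity federation.

```python
import hmac
import hashlib

def idp_issue_assertion(signing_key: bytes, username: str) -> dict:
    """Home identity provider (IdP) signs an assertion about the user
    (hypothetical stand-in for a SAML/OIDC assertion)."""
    sig = hmac.new(signing_key, username.encode(), hashlib.sha256).hexdigest()
    return {"subject": username, "signature": sig}

def gateway_accepts(signing_key: bytes, assertion: dict) -> bool:
    """The science gateway verifies the IdP's signature instead of asking
    the user for a certificate; trust is rooted in the federation's
    out-of-band key exchange."""
    expected = hmac.new(signing_key, assertion["subject"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

# A user logs in at their home institution; the gateway trusts the result.
key = b"federation-shared-key"  # assumed trust anchor for this sketch
token = idp_issue_assertion(key, "alice@university.example")
```

    In a real federation the assertion would carry attributes (affiliation, entitlements) that the gateway maps onto resource permissions.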

    The Anatomy of the Grid - Enabling Scalable Virtual Organizations

    "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In this article, we define this new field. First, we review the "Grid problem," which we define as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources-what we refer to as virtual organizations. In such settings, we encounter unique authentication, authorization, resource access, resource discovery, and other challenges. It is this class of problem that is addressed by Grid technologies. Next, we present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. We describe requirements that we believe any such mechanisms must satisfy, and we discuss the central role played by the intergrid protocols that enable interoperability among different Grid systems. Finally, we discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. We maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.Comment: 24 pages, 5 figure

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and methodology, which would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of the area through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research. (46 pages, 16 figures; technical report)
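    One of the scheduling decisions covered by such taxonomies, replica selection, can be shown with a minimal cost model: pick the replica site minimizing estimated transfer time (startup latency plus data size over bandwidth). The formula and site records here are deliberate simplifications, not any specific Data Grid's algorithm.

```python
def transfer_time(size_gb, site):
    """Estimated fetch cost for one replica: startup latency plus
    size (in gigabits) divided by available bandwidth."""
    return site["latency_s"] + size_gb * 8 / site["bandwidth_gbps"]

def select_replica(size_gb, sites):
    """Choose the replica site with the lowest estimated transfer time."""
    return min(sites, key=lambda s: transfer_time(size_gb, s))

# Hypothetical catalogue entries for one dataset replicated at two sites.
replicas = [
    {"name": "site-eu", "latency_s": 0.05, "bandwidth_gbps": 10.0},
    {"name": "site-us", "latency_s": 0.15, "bandwidth_gbps": 40.0},
]
```

    Note the crossover this model exposes: for large transfers bandwidth dominates, while for tiny files the lower-latency site wins, which is why real brokers track both metrics.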

    Security Enhancement of IoT and Fog Computing Via Blockchain Applications

    Blockchain technology is becoming highly appealing to the next generation because it is well suited to the information age, and it can also be applied to the Internet of Things (IoT) and fog computing. The development of IoT and fog computing technologies in different fields has resulted in a major improvement in distributed networks. The blockchain principle necessitates a transparent data storage mechanism for storing and exchanging data and transactions throughout the network. In this paper, we first explain Blockchain, its architecture, and its security. We then review Blockchain applications in IoT security. Next, we explain fog computing and its generic security requirements, and discuss Blockchain applications that enhance fog computing security. Finally, we review recent literature on using Blockchain applications to improve the security of IoT and fog computing, and compare the methods proposed in that literature.
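    The "transparent data storage mechanism" at the heart of the blockchain principle can be sketched as a hash-linked list: tampering with any stored record invalidates every later link. This is a minimal sketch for intuition only; it has no consensus, mining, or networking, and the class and field names are illustrative.

```python
import hashlib
import json

class Block:
    """One record in the ledger, chained to its predecessor by hash."""
    def __init__(self, index, data, prev_hash):
        self.index = index
        self.data = data
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps([self.index, self.data, self.prev_hash],
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.blocks = [Block(0, "genesis", "0" * 64)]

    def add(self, data):
        prev = self.blocks[-1]
        self.blocks.append(Block(prev.index + 1, data, prev.hash))

    def valid(self):
        """Each block must hash to what its successor recorded, so
        altering stored data anywhere breaks the chain downstream."""
        return all(cur.prev_hash == prev.hash
                   and cur.hash == cur.compute_hash()
                   for prev, cur in zip(self.blocks, self.blocks[1:]))

# Hypothetical IoT readings appended to the ledger.
ledger = Ledger()
ledger.add({"device": "gw-17", "reading": 21.5})
ledger.add({"device": "gw-17", "reading": 21.7})
```

    This is the property that makes blockchain attractive for IoT and fog settings: any node can independently re-verify the whole history without trusting the node that stored it.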

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting the BD and large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to cloud computing are the main contributions of this thesis.

    Grid Security and Trust Management Overview

    Security is one of the most important aspects of all grid environments. Researchers and engineers have developed many technologies and frameworks to establish an environment in which users can exploit grid capabilities in a secure manner. In traditional grid environments, security is based on user authentication and authorization of users' actions on shared resources. However, this approach demands a pre-established trust relationship between the grid users and the resource providers. Security based on trust management enables the dynamic creation of trust relationships between unknown parties. This article reviews various trust models designed for grid environments and lists their main characteristics and purpose in traditional and emerging grids.
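    A recurring pattern in trust models of this kind is to blend first-hand experience with recommendations from third parties, so that two previously unknown parties can still establish a trust relationship. The linear weighting and the `alpha` parameter below are a generic illustration and assumptions of this sketch, not any specific model surveyed by the article.

```python
def trust_score(direct, recommendations, alpha=0.7):
    """Blend direct experience (weight alpha) with the average of
    third-party recommendations (weight 1 - alpha); values in [0, 1].
    With no recommendations, fall back to direct experience alone."""
    if not recommendations:
        return direct
    reputation = sum(recommendations) / len(recommendations)
    return alpha * direct + (1 - alpha) * reputation
```

    A resource provider could then admit an otherwise unknown user whenever the computed score exceeds a policy threshold, which is what makes trust relationships dynamic rather than pre-established.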