
    Grid infrastructures for the electronics domain: requirements and early prototypes from an EPSRC pilot project

    The fundamental challenge facing future electronics design is to address the decreasing, atomistic scale of transistor devices and to understand and predict the impact of the statistical variability this introduces into the design of circuits and systems. The EPSRC pilot project “Meeting the Design Challenges of nanoCMOS Electronics” (nanoCMOS), which began in October 2006, has been funded to explore this space. This paper outlines the key requirements that must be addressed for Grid technology to support the various research strands in this domain, and shows early prototypes demonstrating how these requirements are being addressed.
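
    The statistical variability the abstract refers to can be pictured with a simple Monte Carlo experiment. The sketch below is illustrative only: the nominal threshold voltage, the spread, and the design window are assumed values, not figures from the nanoCMOS project.

    import random
    import statistics

    NOMINAL_VTH = 0.30   # volts; illustrative nominal threshold voltage
    SIGMA_VTH = 0.03     # volts; assumed spread from atomistic variability

    def sample_device_vth(n_devices: int) -> list[float]:
        """Draw per-device threshold voltages from a normal distribution."""
        return [random.gauss(NOMINAL_VTH, SIGMA_VTH) for _ in range(n_devices)]

    vths = sample_device_vth(10_000)
    print(f"mean Vth  = {statistics.mean(vths):.4f} V")
    print(f"stdev Vth = {statistics.stdev(vths):.4f} V")
    # Circuit-level impact: count devices falling outside a +/-10% window.
    outliers = sum(1 for v in vths if abs(v - NOMINAL_VTH) > 0.1 * NOMINAL_VTH)
    print(f"devices outside the 10% design window: {outliers}")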

    Towards a grid-enabled simulation framework for nano-CMOS electronics

    The electronics design industry is facing major challenges as transistors continue to decrease in size. The next generation of devices will be so small that the position of individual atoms will affect their behaviour. This will cause the transistors on a chip to have highly variable characteristics, which in turn will impact circuit and system design tools. The EPSRC project "Meeting the Design Challenges of Nano-CMOS Electronics" (Nano-CMOS) has been funded to explore this area. In this paper, we describe the distributed data-management and computing framework under development within Nano-CMOS. A key aspect of this framework is the need for robust and reliable security mechanisms that support distributed electronics design groups who wish to collaborate by sharing designs, simulations, workflows, datasets and computation resources. This paper presents the system design and an early prototype of the project, which has helped us understand the benefits of such a grid infrastructure. In particular, we present two typical use cases: user authentication, and execution of large-scale device simulations.
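
    A minimal sketch of how those two use cases fit together is given below. The VO registry, right names, and job fields are hypothetical stand-ins; the actual framework relies on Grid security infrastructure such as X.509 credentials rather than the in-memory table shown here.

    from dataclasses import dataclass

    # Hypothetical virtual-organisation registry mapping users to rights.
    VO_MEMBERS = {"alice": {"nanocmos:simulate"}, "bob": {"nanocmos:read"}}

    @dataclass
    class Job:
        owner: str
        simulator: str
        parameters: dict

    def authenticate(user: str, required_right: str) -> bool:
        """Stand-in for certificate-based authentication plus VO authorization."""
        return required_right in VO_MEMBERS.get(user, set())

    def submit_device_simulation(job: Job) -> str:
        if not authenticate(job.owner, "nanocmos:simulate"):
            raise PermissionError(f"{job.owner} may not run simulations")
        # A real framework would stage input files and dispatch to Grid compute.
        return f"job for {job.simulator} accepted from {job.owner}"

    print(submit_device_simulation(Job("alice", "atomistic-mc", {"devices": 1000})))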

    A National Collaboratory to Advance the Science of High Temperature Plasma Physics for Magnetic Fusion

    This report summarizes the work of the National Fusion Collaboratory (NFC) Project to develop a persistent infrastructure enabling scientific collaboration for magnetic fusion research. The original objective of the NFC project was to develop and deploy a national FES Grid (FusionGrid): a system for secure sharing of computation, visualization, and data resources over the Internet. The goal of FusionGrid was to allow scientists at remote sites to participate as fully in experiments and computational activities as if they were working on site, thereby creating a unified virtual organization of the geographically dispersed U.S. fusion community. The vision for FusionGrid was that experimental and simulation data, computer codes, analysis routines, visualization tools, and remote collaboration tools would all be treated as network services. In this model, an application service provider (ASP) provides and maintains the software resources as well as the necessary hardware resources. The project would create a robust, user-friendly collaborative software environment and make it available to the US FES community. The Grid's resources would be protected by a shared security infrastructure, including strong authentication to identify users and authorization to allow stakeholders to control their own resources. In this environment, access to services is stressed rather than data or software portability.
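
    The stakeholder-controlled authorization model can be sketched as a per-service access list consulted after authentication. Everything below is hypothetical illustration: the registry layout, site names, and service names are not taken from the NFC report.

    # Hypothetical per-service access lists; each service's owning site
    # controls who may use it, per the FusionGrid authorization model.
    SERVICE_ACL = {
        "analysis-service":      {"owner": "site-a", "allowed": {"alice", "carol"}},
        "visualization-service": {"owner": "site-b", "allowed": {"alice"}},
    }

    def can_use(user: str, service: str) -> bool:
        """Authorization check, applied after authentication names the user."""
        entry = SERVICE_ACL.get(service)
        return entry is not None and user in entry["allowed"]

    for user in ("alice", "bob"):
        print(user, "->", can_use(user, "analysis-service"))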

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping give new practitioners an easy way to understand this complex area of research.
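
    One of the taxonomy axes the survey covers, replica management, can be illustrated with a toy replica catalogue. The catalogue contents and the bandwidth-only cost model below are assumptions for illustration, not taken from the paper.

    # Hypothetical replica catalogue: one logical file name, several replicas.
    REPLICA_CATALOG = {
        "lfn://experiment/run42.dat": [
            {"site": "site-a", "bandwidth_mbps": 100},
            {"site": "site-b", "bandwidth_mbps": 400},
            {"site": "site-c", "bandwidth_mbps": 250},
        ],
    }

    def select_replica(logical_name: str) -> dict:
        """Pick the replica with the highest bandwidth to the client."""
        return max(REPLICA_CATALOG[logical_name], key=lambda r: r["bandwidth_mbps"])

    print(select_replica("lfn://experiment/run42.dat"))  # chooses site-b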

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting BD and the large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), the OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud are the main contributions of this thesis.
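
    The elastic side of a scheduler like OPERA can be pictured as a rule that scales container workers with queue depth. The rule, thresholds, and names below are illustrative assumptions, not the dissertation's actual algorithm.

    def desired_workers(queued_tasks: int,
                        tasks_per_worker: int = 4,
                        max_workers: int = 32) -> int:
        """Worker count an elastic scheduler might converge to for a queue."""
        needed = -(-queued_tasks // tasks_per_worker)  # ceiling division
        return max(1, min(max_workers, needed))

    for queue in (3, 17, 200):
        print(f"queue={queue:>3} -> workers={desired_workers(queue)}")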

    On Usage Control for GRID Systems

    This paper introduces a formal model, an architecture and a prototype implementation for usage control on GRID systems. The usage control model (UCON) is a new access control paradigm proposed by Park and Sandhu that encompasses and extends several existing models (e.g. MAC, DAC, Bell-LaPadula, RBAC). Its main novelty lies in the continuity of access monitoring and the mutability of the attributes of subjects and objects. We identified this model as a perfect candidate for managing access/usage control in GRID systems, where continuity of control is a central issue. Here we adapt the original UCON model to develop a full model for usage control in GRID systems. We use a process description language as the policy specification language and show that it is suitable for modelling the usage policy models of the original UCON model. We also describe a possible architecture to implement the usage control model. Moreover, we describe a prototype implementation for usage control of GRID computational services, and show how our language can be used to define a security policy that regulates the usage of network communications to protect the local computational service from the applications that are executed on behalf of remote GRID users.
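
    The two UCON novelties, continuity of control and attribute mutability, can be sketched as a session whose policy is re-checked on every action. The quota policy, attribute names, and byte counts below are illustrative assumptions, not the paper's process-language formalism.

    class UsageSession:
        def __init__(self, subject: str, byte_quota: int):
            self.subject = subject
            self.byte_quota = byte_quota   # mutable attribute, in UCON terms
            self.bytes_used = 0
            self.active = True

        def policy_holds(self) -> bool:
            """Ongoing condition: network usage must stay within the quota."""
            return self.bytes_used <= self.byte_quota

        def transfer(self, n_bytes: int) -> None:
            if not self.active:
                raise PermissionError("session already revoked")
            self.bytes_used += n_bytes       # attribute mutation during use
            if not self.policy_holds():      # continuity of control
                self.active = False
                raise PermissionError(f"quota exceeded for {self.subject}")

    session = UsageSession("remote-grid-user", byte_quota=1_000_000)
    session.transfer(600_000)      # allowed
    try:
        session.transfer(600_000)  # exceeds quota: access revoked mid-usage
    except PermissionError as err:
        print(err)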

    SciTokens: Capability-Based Secure Access to Remote Scientific Data

    The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation, addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables the use of distributed computing for scientific domains that require greater data protection and 2) enables the use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
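
    A capability-style token of this kind can be sketched with PyJWT. Real SciTokens are JWTs signed with the issuer's asymmetric key and verified against its published public key; the shared HS256 secret, issuer URL, and scope values below are simplifying assumptions for illustration.

    import time
    import jwt  # PyJWT: pip install pyjwt

    SECRET = "demo-shared-secret"  # stand-in for real issuer key material

    # The "scope" claim carries the capability: read access to one dataset path.
    claims = {
        "iss": "https://issuer.example.org",   # hypothetical issuer
        "sub": "workflow-1234",
        "scope": "read:/frames/O3",
        "exp": int(time.time()) + 600,         # short-lived token
    }
    token = jwt.encode(claims, SECRET, algorithm="HS256")

    def authorize(tok: str, wanted: str) -> bool:
        """Grant access only if the token's scope covers the requested action."""
        decoded = jwt.decode(tok, SECRET, algorithms=["HS256"])
        return wanted in decoded["scope"].split()

    print(authorize(token, "read:/frames/O3"))   # True
    print(authorize(token, "write:/frames/O3"))  # False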