
    Towards a Swiss National Research Infrastructure

In this position paper we describe the current status of and plans for a Swiss National Research Infrastructure. Swiss academic and research institutions are highly autonomous: they are only loosely coupled and do not rely on any centralized management entity. A coordinated national research infrastructure can therefore only be established by federating the resources available locally at the individual institutions. The Swiss Multi-Science Computing Grid and the Swiss Academic Compute Cloud projects already serve a large number of diverse user communities. These projects also allow us to test the operational setup of such a heterogeneous federated infrastructure.

    A Shibboleth-protected privilege management infrastructure for e-science education

Simplifying access to and usage of large-scale compute resources via the grid is of critical importance in encouraging the uptake of e-research. Security is one aspect that needs to be made as simple as possible for end users. The ESP-Grid and DyVOSE projects at the National e-Science Centre (NeSC) at the University of Glasgow are investigating security technologies which will make the end-user experience of using the grid easier and more secure. In this paper, we outline how simplified (from the user's perspective) authentication and authorization are achieved through single usernames and passwords at users' home institutions. This infrastructure, which will be applied in the second year of the Grid Computing module of the advanced MSc in Computing Science at the University of Glasgow, combines grid portal technology, the Internet2 Shibboleth federated access control infrastructure, and the PERMIS role-based access control technology. Through this infrastructure, inter-institutional teaching can be supported, with secure access to federated resources made possible between sites. A key aspect of the work we describe here is the ability to support dynamic delegation of authority, whereby local/remote administrators are able to dynamically assign meaningful privileges to remote/local users respectively in a trusted manner, thus allowing for the dynamic establishment of virtual organizations with fine-grained security at their heart.
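To make the delegation idea concrete, the following Python sketch shows role-based authorization in which an administrator at one site dynamically grants one of its roles to a user asserted by another institution. It is a minimal illustration, not the PERMIS or Shibboleth API; all site names, roles and actions are hypothetical.

```python
# A minimal sketch, not the PERMIS API: role-based authorization with
# dynamic delegation, where an administrator at one site can grant a
# named role (and its permissions) to a user from another institution.
# All names (sites, roles, actions) are hypothetical.

class RoleAuthority:
    def __init__(self, site: str):
        self.site = site
        self.role_permissions: dict[str, set[str]] = {}  # role -> allowed actions
        self.user_roles: dict[str, set[str]] = {}        # user -> assigned roles

    def define_role(self, role: str, actions: set[str]) -> None:
        self.role_permissions[role] = set(actions)

    def delegate_role(self, user: str, role: str) -> None:
        """Dynamically assign one of this site's roles to a (possibly remote) user."""
        if role not in self.role_permissions:
            raise ValueError(f"unknown role: {role}")
        self.user_roles.setdefault(user, set()).add(role)

    def is_authorized(self, user: str, action: str) -> bool:
        return any(action in self.role_permissions[r]
                   for r in self.user_roles.get(user, set()))


# A remote site defines a role and delegates it to a user whose identity
# was asserted by their home institution (e.g. via a Shibboleth login).
remote = RoleAuthority("remote.example.ac.uk")
remote.define_role("course-student", {"submit-job", "read-course-data"})
remote.delegate_role("alice@glasgow.example", "course-student")

print(remote.is_authorized("alice@glasgow.example", "submit-job"))   # True
print(remote.is_authorized("alice@glasgow.example", "delete-data"))  # False
```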

    Development of grid frameworks for clinical trials and epidemiological studies

E-Health initiatives such as electronic clinical trials and epidemiological studies require access to and usage of a range of both clinical and other data sets. Such data sets are typically only available across many heterogeneous domains where a plethora of often legacy-based or in-house/bespoke IT solutions exist. Considerable effort and investment is being made across the UK to upgrade the IT infrastructure of the National Health Service (NHS), for example through the National Programme for IT in the NHS (NPfIT) [1]. However, largely independent and non-interoperable IT solutions currently exist across hospitals, trusts, disease registries and GP practices; this includes security as well as more general compute and data infrastructures. Grid technology allows issues of distribution and heterogeneity to be overcome; however, the clinical trials domain places special demands on security and data which the Grid community has hitherto not satisfactorily addressed. These challenges are often common across many studies and trials, hence the development of a re-usable framework for the creation and subsequent management of such infrastructures is highly desirable. In this paper we present the challenges in developing such a framework and outline initial scenarios and prototypes developed within the MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project [2].
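As a rough illustration of what such a re-usable framework has to do, the Python sketch below fans a query out to heterogeneous site connectors and filters each result through a per-site access policy before merging. It is not the VOTES implementation; the site names, record fields and policies are invented for the example.

```python
# A minimal sketch, not the VOTES framework: a federated query that calls
# a connector per site and applies that site's access policy to each record
# before merging. Sites, fields and policies are hypothetical.

from typing import Callable

# Each connector hides a site's local data format behind a common interface.
SITES: dict[str, Callable[[str], list[dict]]] = {
    "hospital_a": lambda cohort: [{"patient": "A-001", "age": 54, "postcode": "G12"}],
    "gp_practice_b": lambda cohort: [{"patient": "B-107", "age": 61, "postcode": "EH3"}],
}

# Per-site policy: which fields a given study is allowed to see.
POLICY: dict[str, set[str]] = {
    "hospital_a": {"patient", "age"},
    "gp_practice_b": {"age"},  # this site releases only aggregate-friendly fields
}


def federated_query(cohort: str, study_sites: list[str]) -> list[dict]:
    results = []
    for site in study_sites:
        allowed = POLICY.get(site, set())
        for record in SITES[site](cohort):
            # Strip any field the site's policy does not permit for this study.
            results.append({k: v for k, v in record.items() if k in allowed})
    return results


print(federated_query("trial-42", ["hospital_a", "gp_practice_b"]))
```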

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Furthermore, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
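The load-coordination idea can be sketched independently of CloudSim: the hypothetical broker below places each request at the federated data centre with the lowest weighted cost of current utilisation and network distance. This is only an assumed illustration of the principle, not the InterCloud architecture or its evaluation setup.

```python
# A minimal sketch of federated load coordination, not InterCloud or CloudSim:
# a broker picks the data centre for each request by combining current
# utilisation with network latency to the user. Names, capacities, latencies
# and the cost weights are hypothetical.

from dataclasses import dataclass


@dataclass
class DataCentre:
    name: str
    capacity: int          # VMs the site can host
    load: int = 0          # VMs currently placed

    def utilisation(self) -> float:
        return self.load / self.capacity


def place_request(centres: list[DataCentre], latency_ms: dict[str, float]) -> DataCentre:
    """Choose the centre with the lowest weighted cost; fail if the federation is full."""
    candidates = [dc for dc in centres if dc.load < dc.capacity]
    if not candidates:
        raise RuntimeError("no capacity in the federation")
    # Weighted cost: penalise highly loaded sites, favour nearby ones.
    best = min(candidates,
               key=lambda dc: 0.7 * dc.utilisation() + 0.3 * latency_ms[dc.name] / 100)
    best.load += 1
    return best


centres = [DataCentre("eu-west", 100, 90), DataCentre("us-east", 100, 20)]
latency = {"eu-west": 15.0, "us-east": 120.0}
print(place_request(centres, latency).name)  # "us-east": the closer site is overloaded
```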

    WLCG Security Operations Centres Working Group

Security monitoring is an area of considerable interest for sites in the Worldwide LHC Computing Grid (WLCG), particularly as we move as a community towards the use of a growing range of computing models and facilities. There is an increasingly large set of tools available for these purposes, many of which work in concert and use concepts drawn from the use of analytics for Big Data. The integration of these tools into what is commonly called a Security Operations Centre (SOC), however, can be a complex task; the open-source project Apache Metron (which at the time of writing is in the incubator stage and is an evolution of the earlier OpenSOC project) is a popular example of one such integration. At the same time, the necessary scope and rollout of such tools can vary widely for sites of different sizes and topologies. Nevertheless, the use of such platforms could be critical for security in modern Grid and Cloud sites across all scientific disciplines. In parallel, the use of and need for threat intelligence sharing is at a key stage, and it is an important component of a SOC. Grid and Cloud security is a global endeavour: modern threats can affect the entire community, and trust between sites is of utmost importance. Threat intelligence sharing platforms are a vital component in building this trust as well as in propagating useful threat data. The MISP software (Malware Information Sharing Platform) is a very popular and flexible tool for this purpose, in use at a wide range of organizations in different domains across the world. In this context we present the work of the WLCG Security Operations Centres Working Group, which was created to coordinate activities in these areas across the WLCG. The mandate of this group includes the development of a scalable SOC reference design applicable to a range of sites, informed by an examination of current and prospective SOC projects and tools. In particular we report on the first work on the deployment of MISP and the Bro Intrusion Detection System at a number of WLCG sites as SOC components, including areas of integration between these tools. We also report on our future roadmap and framework, which includes the Apache Metron project.
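One of the integration steps mentioned above, feeding shared threat intelligence into intrusion-detection output, can be illustrated with a small Python sketch: indicators exported from a MISP instance are matched against simplified Bro/Zeek-style connection records. The export format, log fields and addresses are assumptions for the example, not the working group's actual deployment.

```python
# A minimal sketch of one SOC integration step, not the WLCG deployment:
# match destination addresses from simplified Bro/Zeek-style connection
# records against indicators exported from a MISP instance. The CSV export
# shape, log fields and addresses are illustrative assumptions.

import csv
import io

# Indicators as they might be exported from MISP (attribute type, value).
misp_export = "type,value\nip-dst,203.0.113.7\ndomain,bad.example.net\n"
indicators = {row["value"]
              for row in csv.DictReader(io.StringIO(misp_export))
              if row["type"] == "ip-dst"}

# Simplified connection records in the spirit of a conn.log.
conn_log = [
    {"ts": "1510000000.1", "orig_h": "10.0.0.5", "resp_h": "203.0.113.7"},
    {"ts": "1510000003.8", "orig_h": "10.0.0.9", "resp_h": "192.0.2.44"},
]

# Raise an alert for every connection to a known-bad destination.
for rec in conn_log:
    if rec["resp_h"] in indicators:
        print(f"ALERT: {rec['orig_h']} contacted {rec['resp_h']} at ts={rec['ts']}")
```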

    Adapting federated cyberinfrastructure for shared data collection facilities in structural biology

Historically, it has been difficult to manage and maintain the early-stage experimental data collected by structural biologists at synchrotron facilities. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to manage collected data at synchrotrons and to facilitate the efficient and secure transfer of data to the owner's home institution.

    Global state, local decisions: Decentralized NFV for ISPs via enhanced SDN

The network functions virtualization (NFV) paradigm is rapidly gaining interest among Internet service providers. However, the transition to this paradigm on ISP networks comes with a unique set of challenges: legacy equipment already in place, heterogeneous traffic from multiple clients, and very large scalability requirements. In this article we thoroughly analyze such challenges and discuss NFV design guidelines that address them efficiently. In particular, we show that decentralizing NFV control while maintaining global state improves scalability, enables better per-flow decisions and simplifies the implementation of virtual network functions. Building on these principles, we propose a partially decentralized NFV architecture enabled by an enhanced software-defined networking (SDN) infrastructure. We also perform a qualitative analysis of the architecture to identify advantages and challenges. Finally, we determine the bottleneck component based on that analysis, and we implement and benchmark it in order to assess the feasibility of the architecture.
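A minimal Python sketch of the "global state, local decisions" principle follows: each NFV node makes per-flow decisions against a locally replicated view of global state and merges updates from other nodes asynchronously. The per-client quota, node names and merge rule are assumptions for illustration, not the proposed architecture.

```python
# A minimal sketch of "global state, local decisions", not the proposed
# architecture: each node decides per flow from its local replica of a
# network-wide counter and merges peer updates asynchronously.
# Quota, node names and the merge rule are hypothetical.

class NFVNode:
    def __init__(self, name: str, quota_bytes: int):
        self.name = name
        self.quota = quota_bytes
        self.global_state: dict[str, int] = {}   # client -> bytes seen network-wide

    def decide(self, client: str, flow_bytes: int) -> str:
        """Per-flow decision taken locally, against the replicated global view."""
        used = self.global_state.get(client, 0)
        if used + flow_bytes > self.quota:
            return "rate-limit"
        self.global_state[client] = used + flow_bytes
        return "forward"

    def merge(self, remote_state: dict[str, int]) -> None:
        """Periodic state exchange with other nodes (e.g. via an SDN controller)."""
        for client, count in remote_state.items():
            self.global_state[client] = max(self.global_state.get(client, 0), count)


edge_a, edge_b = NFVNode("edge-a", 10_000), NFVNode("edge-b", 10_000)
print(edge_a.decide("client-1", 6_000))   # forward
edge_b.merge(edge_a.global_state)         # edge-b learns about the 6,000 bytes
print(edge_b.decide("client-1", 6_000))   # rate-limit: would exceed the global quota
```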

    Strategies and challenges to facilitate situated learning in virtual worlds post-Second Life

Virtual worlds can establish a stimulating environment to support a situated learning approach in which students simulate a task within a safe environment. While in previous years Second Life played a major role in providing such a virtual environment, more and more alternative, often OpenSim-based, solutions are now deployed within the educational community. By drawing parallels to social networks, we discuss two aspects: how to link individually hosted virtual worlds together in order to implement context for immersion, and how to identify and avoid "fake" avatars so that the people behind these avatars can be held accountable for their actions.
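One conceivable way to make avatars accountable across individually hosted worlds is for an avatar's home grid to sign an identity token that a visited world can verify, as in the hedged Python sketch below. The grid names, shared keys and token format are hypothetical and are not drawn from the paper.

```python
# A minimal sketch of avatar accountability across federated worlds, not the
# paper's approach: the home grid signs a token binding the avatar name to
# the grid, and a visited world verifies it. Keys, grid names and the token
# format are hypothetical assumptions.

import hashlib
import hmac

HOME_GRID_KEYS = {"opensim.uni-a.example": b"shared-secret-key"}  # assumed pre-exchanged


def issue_token(grid: str, avatar: str) -> str:
    sig = hmac.new(HOME_GRID_KEYS[grid], f"{grid}:{avatar}".encode(), hashlib.sha256)
    return f"{grid}:{avatar}:{sig.hexdigest()}"


def verify_token(token: str) -> bool:
    grid, avatar, sig = token.rsplit(":", 2)
    key = HOME_GRID_KEYS.get(grid)
    if key is None:
        return False   # unknown grid: treat the avatar as unverified
    expected = hmac.new(key, f"{grid}:{avatar}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


token = issue_token("opensim.uni-a.example", "Student Resident")
print(verify_token(token))                              # True: home grid vouches for the avatar
print(verify_token(token.replace("Student", "Fake")))   # False: signature no longer matches
```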