343 research outputs found

    A Shibboleth-protected privilege management infrastructure for e-science education

    Simplifying access to and usage of large-scale compute resources via the grid is of critical importance to encourage the uptake of e-research. Security is one aspect that needs to be made as simple as possible for end users. The ESP-Grid and DyVOSE projects at the National e-Science Centre (NeSC) at the University of Glasgow are investigating security technologies which will make the end-user experience of using the grid easier and more secure. In this paper, we outline how simplified (from the user's perspective) authentication and authorization of users are achieved through single usernames and passwords at users' home institutions. This infrastructure, which will be applied in the second year of the grid computing module of the advanced MSc in Computing Science at the University of Glasgow, combines grid portal technology, the Internet2 Shibboleth Federated Access Control infrastructure, and the PERMIS role-based access control technology. Through this infrastructure, inter-institutional teaching can be supported where secure access to federated resources is made possible between sites. A key aspect of the work we describe here is the ability to support dynamic delegation of authority, whereby local/remote administrators are able to dynamically assign meaningful privileges to remote/local users respectively in a trusted manner, thus allowing for the dynamic establishment of virtual organizations with fine-grained security at their heart.
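    The dynamic delegation of authority described above can be pictured with a small sketch. The Python fragment below is purely illustrative (it is not PERMIS or Shibboleth code, and every name in it is invented): a local administrator who is trusted to delegate a role assigns it to a remote user, and later access checks consult the resulting assignments.

```python
# Minimal sketch (not the PERMIS implementation): a local administrator
# delegates a named role to a remote user, and access checks consult the
# resulting role assignments. All names below are illustrative only.

class RoleAuthority:
    """Holds role assignments and which admins may delegate which roles."""

    def __init__(self):
        self.assignments = {}          # user -> set of roles
        self.delegation_rights = {}    # admin -> roles they may delegate

    def grant_delegation_right(self, admin, role):
        self.delegation_rights.setdefault(admin, set()).add(role)

    def delegate(self, admin, user, role):
        # An admin may only assign roles they are trusted to delegate.
        if role not in self.delegation_rights.get(admin, set()):
            raise PermissionError(f"{admin} may not delegate role {role!r}")
        self.assignments.setdefault(user, set()).add(role)

    def is_authorized(self, user, required_role):
        return required_role in self.assignments.get(user, set())


if __name__ == "__main__":
    authority = RoleAuthority()
    # A local administrator is trusted to delegate the 'grid-student' role.
    authority.grant_delegation_right("glasgow-admin", "grid-student")
    # The remote user is identified by the handle asserted by their home
    # institution (e.g. via a Shibboleth login); the value here is made up.
    authority.delegate("glasgow-admin", "student@remote.example.ac.uk", "grid-student")
    print(authority.is_authorized("student@remote.example.ac.uk", "grid-student"))  # True
```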

    Deploying and Maintaining a Campus Grid at Clemson University

    Many institutions have all the tools needed to create a local grid that aggregates commodity compute resources into an accessible grid service, while simultaneously maintaining user satisfaction and system security. In this thesis, the author presents a three-tiered strategy used at Clemson University to deploy and maintain a grid infrastructure by making resources available to both local and federated remote users for scientific research. Using this approach, virtually no compute cycles are wasted. Usage trends and power consumption statistics collected from the Clemson campus grid are used as a reference for best practices. The loosely coupled components that comprise the campus grid work together to form a highly cohesive infrastructure that not only meets the computing needs of local users, but also helps to fill the needs of the scientific community at large. Experience gained from the deployment and management of this system may be adapted to other grid sites, allowing for the development of campus-wide, grid-connected cyberinfrastructures.
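    As a rough illustration of how idle commodity machines can be matched to waiting jobs so that few cycles go unused, the following Python sketch mimics a single matchmaking step. It is not Clemson's actual scheduler configuration; every machine and job attribute is invented for the example.

```python
# Illustrative sketch only: match queued jobs to idle campus machines so
# that spare cycles are not wasted. Attributes and names are made up.

machines = [
    {"name": "lab-pc-01", "idle": True,  "cpus": 4,  "memory_mb": 8192},
    {"name": "lab-pc-02", "idle": False, "cpus": 4,  "memory_mb": 8192},
    {"name": "hpc-node-7", "idle": True, "cpus": 32, "memory_mb": 65536},
]

jobs = [
    {"id": "sim-001", "cpus": 2,  "memory_mb": 4096},
    {"id": "sim-002", "cpus": 16, "memory_mb": 32768},
]

def match(job, machine):
    """A job may run only on an idle machine with enough CPUs and memory."""
    return (machine["idle"]
            and machine["cpus"] >= job["cpus"]
            and machine["memory_mb"] >= job["memory_mb"])

for job in jobs:
    target = next((m for m in machines if match(job, m)), None)
    if target:
        target["idle"] = False  # claim the machine for this job
        print(f"{job['id']} -> {target['name']}")
    else:
        print(f"{job['id']} stays queued")
```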

    Extensible Terascale Facility (ETF): Indiana-Purdue Grid (IP-Grid)

    NSF Award ID: ACI-0338618. Project Dates: 10/1/03-9/30/0

    Support for flexible and transparent distributed computing

    Modern distributed computing developed from the traditional supercomputing community, rooted firmly in the culture of batch management. The field has therefore been dominated by queuing-based resource managers and workflow-based job submission environments, where static resource demands needed to be determined and reserved prior to launching executions. This has made it difficult to support resource environments (e.g. Grid, Cloud) where both the available resources and the resource requirements of applications may be dynamic and unpredictable. This thesis introduces a flexible execution model in which the compute capacity can be adapted to fit the needs of applications as they change during execution. Resource provision in this model is based on a fine-grained, self-service approach instead of the traditional one-time, system-level model. The thesis introduces a middleware-based Application Agent (AA) that provides a platform for applications to dynamically interact and negotiate resources with the underlying resource infrastructure. We also consider the issue of transparency, i.e., hiding the provision and management of the distributed environment; this is key to attracting the public to use the technology. The AA not only replaces the user-controlled process of preparing and executing an application with a transparent, software-controlled process, it also hides the complexity of selecting the right resources to ensure execution QoS. This service is provided by an On-line Feedback-based Automatic Resource Configuration (OAC) mechanism cooperating with the flexible execution model. The AA constantly monitors utility-based feedback from the application during execution and is thus able to learn its behaviour and resource characteristics. This allows it to automatically compose the most efficient execution environment on the fly and satisfy any execution requirements defined by users. Two policies are introduced to supervise the information learning and resource tuning in the OAC. The Utility Classification policy classifies hosts according to their historical performance contributions to the application. According to this classification, the AA chooses high-utility hosts and withdraws low-utility hosts to configure an optimum environment. The Desired Processing Power Estimation (DPPE) policy dynamically configures the execution environment according to the estimated total processing power needed to satisfy users' execution requirements. Through the introduction of flexibility and transparency, a user is able to run a dynamic or conventional distributed application anywhere with optimised execution performance, without managing distributed resources. Building on the standalone model, the thesis further introduces a federated resource negotiation framework as a step towards an autonomous, multi-user distributed computing world.
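    The Utility Classification idea can be sketched in a few lines. The Python below is an illustrative reading of that policy, assuming each host has a single numeric utility score derived from execution feedback; the thresholds and host names are invented and are not the thesis's actual parameters.

```python
# Sketch of a utility-classification step: keep hosts whose historical
# contribution (utility) is high, withdraw hosts whose utility is low.
# Thresholds and host names are illustrative assumptions only.

def classify_hosts(utilities, high_threshold=0.7, low_threshold=0.3):
    """Split hosts into keep/drop lists based on historical utility scores."""
    keep, drop = [], []
    for host, utility in utilities.items():
        if utility >= high_threshold:
            keep.append(host)
        elif utility <= low_threshold:
            drop.append(host)
    return keep, drop


def reconfigure(current_hosts, utilities, candidate_hosts):
    """Withdraw low-utility hosts and add high-utility candidates."""
    keep, drop = classify_hosts(utilities)
    environment = [h for h in current_hosts if h not in drop]
    environment += [h for h in candidate_hosts if h in keep and h not in environment]
    return environment


if __name__ == "__main__":
    utilities = {"node-a": 0.9, "node-b": 0.2, "node-c": 0.5, "node-d": 0.8}
    current = ["node-a", "node-b", "node-c"]
    print(reconfigure(current, utilities, ["node-d"]))  # ['node-a', 'node-c', 'node-d']
```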

    Teaching high-performance service in a cluster computing course

    Most courses on cluster computing in graduate and postgraduate studies are focused on parallel programming and high-performance/high-throughput computing. This is the typical usage of clusters in academia and research centres. However, nowadays many companies provide web, mail and, in general, Internet services using computer clusters. These services require a different "cluster flavour": high-performance service and high availability. Despite the fact that computer clusters for each environment demand a different configuration, most university cluster computing courses keep focusing only on high-performance computing, ignoring other possibilities. In this paper, we propose several teaching strategies for a course on cluster computing that could fill this gap. The content developed here would be taught as part of the course. It presents several strategies for configuring, testing and evaluating a high-availability/load-balanced Internet server. A virtualization-based platform is used to build a cluster prototype, using Linux as its operating system. Evaluation of the course shows that students' knowledge and skills on the subject are improved by the end of the course. Regarding the teaching methodology, the results obtained in the yearly survey of the University confirm student satisfaction.
    This work was supported in part by the Spanish Ministerio de Economia y Competitividad (MINECO) and by FEDER funds under Grant TIN2015-66972-C5-1-R.
    López Rodríguez, PJ.; Baydal Cardona, ME. (2018). Teaching high-performance service in a cluster computing course. Journal of Parallel and Distributed Computing. 117:138-147. https://doi.org/10.1016/j.jpdc.2018.02.027
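    A minimal sketch of the high-availability/load-balanced "cluster flavour" this course targets might look like the Python fragment below, which dispatches requests round-robin across backends that pass a TCP health check. It is not the course's actual configuration (a real deployment would use a production load balancer), and the backend addresses are placeholders.

```python
# Minimal sketch: round-robin dispatch over healthy backends, combining
# load balancing with a simple availability check. Addresses are placeholders.

import itertools
import socket

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]
_rotation = itertools.cycle(BACKENDS)

def is_alive(host, port, timeout=0.5):
    """Health check: can we open a TCP connection to the backend?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_backend():
    """Return the next healthy backend in round-robin order, or None."""
    for _ in range(len(BACKENDS)):
        host, port = next(_rotation)
        if is_alive(host, port):
            return host, port
    return None  # every backend failed its health check

if __name__ == "__main__":
    backend = next_backend()
    print("dispatching to", backend if backend else "nothing: all backends are down")
```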

    Workshop Report: Campus Bridging: Reducing Obstacles on the Path to Big Answers 2015

    For the researcher whose experiments require large-scale cyberinfrastructure, there exist significant challenges to successful completion. These challenges are broad and go far beyond the simple issue that there are not enough large-scale resources available; these solvable issues range from a lack of documentation written for a non-technical audience to a need for greater consistency in system configuration and in software configuration and availability on the large-scale resources at national-tier supercomputing centers, with a number of other challenges existing alongside the ones mentioned here. Campus Bridging is a relatively young discipline that aims to mitigate these issues for the academic end user, for whom the entire process can feel like a path composed entirely of obstacles. The solutions to these problems must by necessity include multiple approaches, with focus not only on the end user but also on the system administrators responsible for supporting these resources, as well as on the systems themselves. These system resources include not only those at the supercomputing centers but also those that exist at the campus or departmental level, and even the personal computing devices the researcher uses to complete his or her work. This workshop report compiles the results of a half-day workshop held in conjunction with IEEE Cluster 2015 in Chicago, IL.
    NSF XSEDE

    2012 XSEDE User Satisfaction Survey

    This is the final report from the 2012 XSEDE User Satisfaction Survey.
    National Science Foundation OCI-1053575

    On a course on computer cluster configuration and administration

    Computer clusters are today a cost-effective way of providing high performance and/or high availability. The flexibility of their configuration aims to fit the needs of multiple environments, from small servers to SME and large Internet servers. For these reasons, their usage has expanded not only in academia but also in many companies. However, each environment needs a different "cluster flavour". High-performance and high-throughput computing are required in universities and research centres, while high-performance service and high availability are usually required in companies. Despite this fact, most university cluster computing courses continue to cover only high-performance computing, usually ignoring other possibilities. In this paper, a master-level course which attempts to fill this gap is discussed. It explores the different types of cluster computing as well as their functional basis, from a very practical point of view. As part of the teaching methodology, each student builds a computer cluster from scratch using a virtualization tool. The entire process is designed to be scalable: the goal is to be able to apply it to an actual computer cluster with a larger number of nodes, such as those the students may subsequently encounter in their professional life.
    This work was supported in part by the Spanish Ministerio de Economia y Competitividad (MINECO) and by FEDER funds under Grant TIN2015-66972-C5-1-R.
    López Rodríguez, PJ.; Baydal Cardona, ME. (2017). On a course on computer cluster configuration and administration. Journal of Parallel and Distributed Computing. 105:127-137. https://doi.org/10.1016/j.jpdc.2017.01.009
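    One way to picture the "scalable from prototype to real cluster" idea is a small script that generates the node inventory for an N-node virtual cluster, so the same procedure works for a 4-node teaching prototype or a much larger machine. The sketch below is hypothetical and not taken from the course material; the hostnames and address range are invented.

```python
# Illustrative sketch only: generate hostname -> IP assignments for an
# N-node virtual cluster (one head node plus workers). Names and the
# address range are assumptions, not the course's actual configuration.

import ipaddress

def build_inventory(n_nodes, network="192.168.56.0/24", prefix="node"):
    """Return hostname -> IP assignments for one head node plus workers."""
    hosts = list(ipaddress.ip_network(network).hosts())
    inventory = {"head": str(hosts[0])}
    for i in range(1, n_nodes):
        inventory[f"{prefix}{i:02d}"] = str(hosts[i])
    return inventory

def as_hosts_file(inventory):
    """Render the inventory in /etc/hosts format for every cluster node."""
    return "\n".join(f"{ip}\t{name}" for name, ip in inventory.items())

if __name__ == "__main__":
    # The same call scales from a small prototype to a larger cluster.
    print(as_hosts_file(build_inventory(4)))
```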