7,467 research outputs found

    A lightweight Web interface to Grid scheduling systems

    Grid computing is often out of reach for the very scientists who need these resources because of the complexity of popular middleware suites. Some effort has gone into abstracting away these complexities using graphical user interfaces, some of which have been Web-based. This paper presents a lightweight and portable interface for Grid management, made possible by recent advances in dynamic technologies for Web applications. Case studies are presented to demonstrate that this interface is both usable and useful. An analysis of usage then highlights some positive and negative aspects of this approach.
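    The paper's implementation is not reproduced in this listing, but the kind of "lightweight" exchange such a dynamic interface depends on can be sketched as a small JSON status endpoint that a page polls asynchronously instead of reloading full HTML; the job table, URL layout, and port below are hypothetical.
```python
# Hypothetical sketch: a minimal JSON status endpoint that a dynamic page can
# poll asynchronously instead of reloading a full HTML page for every update.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in job table; a real interface would query the Grid scheduler instead.
JOBS = {"42": {"state": "RUNNING", "queue": "batch"}}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        job_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(JOBS.get(job_id, {"state": "UNKNOWN"})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)   # a few hundred bytes rather than a full page

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StatusHandler).serve_forever()
```
    Returning a small JSON document per update, rather than a complete page, is the general mechanism behind the responsiveness and bandwidth savings the abstract refers to.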

    A lightweight interface to local Grid scheduling systems

    Many complex research problems require an immense amount of computational power to solve. In order to solve such problems, the concept of the computational Grid was conceived. Although Grid technology is hailed as the next great enabling technology in Computer Science, the last being the inception of the World Wide Web, some concerns have to be addressed if this technology is going to be successful. The main difference between the Web and the Grid in terms of adoption is usability. The Web was designed with both functionality and end-users in mind, whereas the Grid has been designed solely with functionality in mind. Although large Grid installations are operational around the globe, their use is restricted to those who have an in-depth knowledge of their complex architecture and functionality. Such technology is therefore out of reach for the very scientists who need these resources, because of its sheer complexity. The Grid is likely to succeed as a tool for some large-scale problem solving, as there is no alternative on a similar scale. However, in order to integrate such systems into our daily lives, just as the Web has been, they need to be accessible to "novice" users. Without such accessibility, the use and growth of these systems will remain constrained. This dissertation details one possible way of making the Grid more accessible: providing high-level access to the scheduling systems on which Grids rely. Since "the Grid" is a mechanism for transferring control of user-submitted jobs to third-party scheduling systems, high-level access to the schedulers themselves was deemed a natural place to begin usability-enhancing efforts. In order to design a highly usable and intuitive interface to a Grid scheduling system, a series of interviews with scientists was conducted to gain insight into the way in which supercomputing systems are utilised. Once this data was gathered, a paper-based prototype system was developed. This prototype was then evaluated by a group of test subjects, who set out to criticise the interface and suggest where it could be improved. Based on this new data, the final prototype was developed, first on paper and then in software. The implementation makes use of lightweight Web 2.0 technologies. Designing lightweight software allows one to exploit the dynamic properties of Web technologies and thereby create interfaces that are more usable and visually appealing. Finally, the system was once again evaluated by another group of test subjects. In addition to user evaluations, performance experiments and real-world case studies were carried out on the interface. This research concluded that a dynamic Web 2.0-inspired interface appeals to a large group of users and allows greater flexibility in the way in which data, in this case technical data, is presented. In terms of usability, the focal point of this research, it was found that it is possible to build an interface to a Grid scheduling system that can be used by people with no technical Grid knowledge. This is a significant outcome, as users were able to submit jobs to a Grid without fully comprehending the complexities involved, while still understanding the task they were required to perform. Finally, it was found that the lightweight approach is superior to the traditional HTML-only approach in terms of bandwidth usage and response time. In this particular implementation of the interface, the benefits of the lightweight approach are realised approximately halfway through a typical Grid job submission cycle.
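    The dissertation's code is not shown here, but the idea of "high-level access to the schedulers" can be illustrated with a hypothetical wrapper that hides a PBS/Torque-style qsub/qstat command line behind two simple calls; the resource directives and script layout below are assumptions for illustration, not the dissertation's implementation.
```python
# Hypothetical sketch of "high-level access" to a local scheduler: hide a
# PBS/Torque-style qsub/qstat command line behind two simple calls. The
# resource directives below are assumptions, not the dissertation's code.
import subprocess
import tempfile

def submit(command, cores=1, walltime="01:00:00"):
    """Write a minimal batch script, hand it to qsub, and return the job id."""
    script = (
        "#!/bin/sh\n"
        f"#PBS -l nodes=1:ppn={cores},walltime={walltime}\n"
        f"{command}\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    out = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return out.stdout.strip()      # qsub prints the assigned job identifier

def status(job_id):
    """Ask qstat about one job; a Web front end would render this for the user."""
    out = subprocess.run(["qstat", job_id], capture_output=True, text=True)
    return out.stdout or out.stderr

if __name__ == "__main__":
    job = submit("echo hello-grid", cores=4)   # requires a PBS-style scheduler
    print(job, status(job), sep="\n")
```
    A Web front end in the spirit of this work would call submit() and status() on the user's behalf, so that no scheduler syntax is ever exposed to the scientist.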

    A Lightweight Interface to Local Grid Scheduling Systems

    Many computationally intensive research problems can be addressed using a Grid architecture. However, Grid use is restricted to those who have an in-depth knowledge of its complex architecture and functionality. To make Grid computing more accessible, a lightweight Web 2.0 interface was built to the scheduling systems on which Grids rely; this interface can also serve as an abstraction of a large Grid environment. The purpose of the interface was to simplify many of the complexities associated with using Grid architectures. A case study demonstrates the applicability of the interface to a problem that can be solved using a Grid, while a user study demonstrates how users with little or no Grid experience were able to accomplish tasks using the Grid. Lastly, it is shown that the Web 2.0 interface can outperform traditional static interfaces in terms of response time and bandwidth efficiency.
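    To make the response-time and bandwidth claim concrete, a client of the kind of endpoint sketched earlier only needs to fetch a small JSON status document per update rather than a full HTML page; the URL, field names, and polling policy below are illustrative assumptions.
```python
# Hypothetical client-side sketch: each status update fetches a small JSON
# document instead of a full HTML page. URL, fields, and timing are assumed.
import json
import time
import urllib.request

STATUS_URL = "http://localhost:8080/jobs/42"   # illustrative endpoint only

def wait_for_completion(poll_seconds=5, max_polls=120):
    """Poll the status resource until the job reaches a terminal state."""
    for _ in range(max_polls):
        with urllib.request.urlopen(STATUS_URL) as resp:
            status = json.loads(resp.read().decode())
        if status.get("state") in ("DONE", "FAILED", "UNKNOWN"):
            return status
        time.sleep(poll_seconds)
    return {"state": "TIMED_OUT"}

if __name__ == "__main__":
    print(wait_for_completion())
```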

    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research is widely based on modeling and simulation, typically with complex execution flows and parameterization properties. Execution flows are not necessarily straightforward, since they may require multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring traversal of a large parameter space. High-performance computers offer practical resources, at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, removing from the user the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used in single-node, multi-node, and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies, while increasing resource utilization.
    Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, US
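    The abstract does not specify PaPaS's exact file grammar, so the snippet below is only a hypothetical illustration of the keyword-value idea: each keyword lists candidate values, and the study enumerates the Cartesian product of all combinations.
```python
# Hypothetical illustration of a keyword-value parameter study (the actual
# PaPaS grammar may differ): each keyword lists candidate values, and the
# study enumerates the Cartesian product of every combination.
from itertools import product

PARAM_FILE = """\
threads = 1, 2, 4, 8
matrix_size = 512, 1024
compiler_flag = -O2, -O3
"""

def parse(text):
    """Turn 'key = v1, v2, ...' lines into {key: [values]}."""
    params = {}
    for line in text.splitlines():
        if "=" in line:
            key, values = line.split("=", 1)
            params[key.strip()] = [v.strip() for v in values.split(",")]
    return params

def expand(params):
    """Yield one dict per point in the parameter space."""
    keys = list(params)
    for combo in product(*(params[k] for k in keys)):
        yield dict(zip(keys, combo))

if __name__ == "__main__":
    for run_spec in expand(parse(PARAM_FILE)):
        print(run_spec)            # 4 * 2 * 2 = 16 runs for this example
```
    In a framework like PaPaS, each generated combination would then be dispatched to a worker via SSH, a batch system, or MPI, as the abstract describes.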

    A Distributed Economics-based Infrastructure for Utility Computing

    Existing attempts at utility computing revolve around two approaches. The first consists of proprietary solutions involving renting time on dedicated utility computing machines. The second requires the use of heavy, monolithic applications that are difficult to deploy, maintain, and use. We propose a distributed, community-oriented approach to utility computing. Our approach provides an infrastructure built on Web Services in which modular components are combined to create a seemingly simple, yet powerful system. The community-oriented nature generates an economic environment which results in fair transactions between consumers and providers of computing cycles, while simultaneously encouraging improvements in the infrastructure of the computational grid itself.
    Comment: 8 pages, 1 figure

    The OMII Software – Demonstrations and Comparisons between two different deployments for Client-Server Distributed Systems

    This paper describes the key elements of the OMII software and the scenarios in which OMII software can be deployed to achieve distributed computing in the UK e-Science community; two different deployments for client-server distributed systems are demonstrated. Scenarios and experiments for each deployment are described, and their advantages and disadvantages are compared and analyzed. We conclude that the first deployment is more relevant for system administrators and developers, while the second is more suitable from a user's perspective, allowing users to submit and check the status of hundreds of job submissions.
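    The OMII client API itself is not detailed in this summary, so the following is only a schematic sketch of the batch usage pattern described, submitting many jobs and then aggregating their statuses, with stand-in functions in place of a real client.
```python
# Schematic sketch of the batch usage pattern described above: submit many
# jobs concurrently, then aggregate their statuses. submit_job/check_status
# are stand-ins; the actual OMII client API is not given in the abstract.
import random
from concurrent.futures import ThreadPoolExecutor

def submit_job(spec):
    """Stand-in for a client call that submits one job and returns its id."""
    return f"job-{spec['id']}"

def check_status(job_id):
    """Stand-in for a client call that queries one job's state."""
    return random.choice(["QUEUED", "RUNNING", "DONE"])

if __name__ == "__main__":
    specs = [{"id": i, "command": "run_model"} for i in range(100)]
    with ThreadPoolExecutor(max_workers=16) as pool:
        job_ids = list(pool.map(submit_job, specs))
        statuses = list(pool.map(check_status, job_ids))
    print({state: statuses.count(state) for state in set(statuses)})
```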

    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
    Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
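    The funcX SDK itself is not quoted in this listing, so the sketch below is a self-contained, in-process toy that mimics only the FaaS workflow the abstract describes: register a function once, submit invocations to an "endpoint", and retrieve results later. The class and method names are invented for illustration and are not the funcX API.
```python
# Self-contained toy illustrating the FaaS workflow only (not the funcX SDK):
# register a function once, submit invocations to an "endpoint", fetch results.
import uuid
from concurrent.futures import ThreadPoolExecutor

class ToyFunctionService:
    """Invented names for illustration; a thread pool stands in for an endpoint."""

    def __init__(self):
        self._functions = {}                 # function_id -> callable
        self._tasks = {}                     # task_id -> Future
        self._endpoint = ThreadPoolExecutor(max_workers=4)

    def register_function(self, func):
        function_id = str(uuid.uuid4())
        self._functions[function_id] = func
        return function_id

    def run(self, *args, function_id):
        task_id = str(uuid.uuid4())
        self._tasks[task_id] = self._endpoint.submit(self._functions[function_id], *args)
        return task_id

    def get_result(self, task_id):
        return self._tasks[task_id].result()

def double(x):
    return 2 * x

if __name__ == "__main__":
    service = ToyFunctionService()
    fid = service.register_function(double)
    task = service.run(21, function_id=fid)
    print(service.get_result(task))          # prints 42
```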