
    AliEnFS - a Linux File System for the AliEn Grid Services

    Among the services offered by the AliEn (ALICE Environment, http://alien.cern.ch) Grid framework is a virtual file catalogue that allows transparent access to distributed data sets using various file transfer protocols. alienfs (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user-space file system framework (Open Source, http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual File System Switch) to communicate via a generalised file system interface with the AliEn file system daemon. The AliEn framework is used for authentication, catalogue browsing, file registration and read/write transfer operations. A C++ API implements the generic file system operations. The goal of AliEnFS is to allow users easy interactive access to a worldwide distributed virtual file system using familiar shell commands (e.g. cp, ls, rm ...). The paper discusses general aspects of Grid file systems, the AliEn implementation, and current and future developments of the AliEn Grid File System. Comment: 9 pages, 12 figures
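    To make the mapping concrete, the following minimal Python sketch (not the AliEn code; the CatalogueClient, its entries, and the shim methods are invented for illustration) shows the idea of a user-space shim that translates generic file-system calls such as ls and open into catalogue requests, in the spirit of LUFS forwarding VFS operations to the AliEn file system daemon.

        # Minimal sketch (not the AliEn code): a user-space shim that maps
        # generic file-system calls onto a catalogue service, in the spirit
        # of LUFS forwarding VFS operations to the AliEn daemon.
        # CatalogueClient and its entries are hypothetical stand-ins.

        class CatalogueClient:
            """Toy stand-in for a connection to the AliEn file catalogue."""
            def __init__(self):
                # virtual LFN -> (transfer protocol, physical replica URL)
                self.entries = {
                    "/alice/run42/hits.root":
                        ("root", "root://se01.cern.ch//data/hits.root"),
                    "/alice/run42/digits.root":
                        ("ftp", "ftp://se02.cern.ch/data/digits.root"),
                }

            def listdir(self, path):
                prefix = path.rstrip("/") + "/"
                return sorted({e[len(prefix):].split("/")[0]
                               for e in self.entries if e.startswith(prefix)})

            def locate(self, lfn):
                return self.entries[lfn]  # raises KeyError like a missing file

        class AliEnFSShim:
            """Translates file-system style calls into catalogue requests."""
            def __init__(self, client):
                self.client = client

            def ls(self, path):
                return self.client.listdir(path)

            def open(self, lfn):
                protocol, replica = self.client.locate(lfn)
                # A real implementation would start a transfer with the
                # chosen protocol; here we just report what would happen.
                return f"fetching {replica} via {protocol}"

        if __name__ == "__main__":
            fs = AliEnFSShim(CatalogueClient())
            print(fs.ls("/alice/run42"))           # ['digits.root', 'hits.root']
            print(fs.open("/alice/run42/hits.root"))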

    A protocol reconfiguration and optimization system for MPI

    Modern high performance computing (HPC) applications, for example adaptive mesh refinement and multi-physics codes, have dynamic communication characteristics which result in poor performance on current Message Passing Interface (MPI) implementations. The degraded application performance can be attributed to a mismatch between changing application requirements and static communication library functionality. To improve the performance of these applications, MPI libraries should change their protocol functionality in response to changing application requirements and tailor their functionality to take advantage of hardware capabilities. This dissertation describes the Protocol Reconfiguration and Optimization system for MPI (PRO-MPI), a framework for constructing profile-driven reconfigurable MPI libraries; these libraries use past application characteristics (profiles) to dynamically change their functionality to match changing application requirements. The framework addresses the challenges of designing and implementing reconfigurable MPI libraries, which include collecting and reasoning about application characteristics to drive the protocol reconfiguration and defining the abstractions required for implementing these reconfigurations. Two prototype reconfigurable MPI implementations based on the framework - Open PRO-MPI and Cactus PRO-MPI - are also presented to demonstrate the utility of the framework. To demonstrate the effectiveness of reconfigurable MPI libraries, this dissertation presents experimental results showing the impact of using these libraries on application performance. The results show that PRO-MPI improves the performance of important HPC applications and benchmarks: HyperCLaw performance improves by approximately 22% when exact profiles are available, and by approximately 18% when only approximate profiles are available.
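    The core idea of profile-driven reconfiguration can be sketched briefly. The toy below (the names, the eager/rendezvous two-protocol model, and the threshold are assumptions for illustration, not PRO-MPI's actual API) records per-destination message sizes from a past run and uses them to pick a protocol for the next run.

        # Illustrative sketch of profile-driven protocol selection, the
        # core idea behind reconfigurable MPI libraries such as PRO-MPI.
        # Names, thresholds, and the two-protocol model are assumptions.

        from collections import defaultdict
        from statistics import median

        EAGER_LIMIT = 16 * 1024  # bytes; order of magnitude, illustrative

        class ProfiledSender:
            def __init__(self):
                # per-destination message sizes observed in a previous run
                self.profile = defaultdict(list)

            def record(self, dest, size):
                self.profile[dest].append(size)

            def choose_protocol(self, dest):
                sizes = self.profile[dest]
                if not sizes:
                    return "eager"  # no history: cheap default
                # Small messages favour eager (copy into pre-posted
                # buffers); large ones favour rendezvous (handshake,
                # then zero-copy transfer).
                return "eager" if median(sizes) <= EAGER_LIMIT else "rendezvous"

        if __name__ == "__main__":
            s = ProfiledSender()
            for size in (512, 1024, 2048):
                s.record(dest=1, size=size)
            for size in (1 << 20, 4 << 20):
                s.record(dest=2, size=size)
            print(s.choose_protocol(1))  # eager
            print(s.choose_protocol(2))  # rendezvous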

    The Motivation, Architecture and Demonstration of Ultralight Network Testbed

    In this paper we describe progress in the NSF-funded Ultralight project and a recent demonstration of Ultralight technologies at SuperComputing 2005 (SC|05). The goal of the Ultralight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Ultralight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed from end-to-end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the Ultralight project. We then cover early results in the various working areas of the project. The remainder of the paper describes our experience with the Ultralight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this Challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the Ultralight backbone network. The exercise highlighted the benefits of Ultralight's research and development efforts that are enabling new and advanced methods of distributed scientific data analysis.
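    The "network as a managed resource" idea can be pictured with a small sketch: an end-to-end manager polls per-path utilisation from a monitoring service and steers a transfer onto the least-loaded path. The path names, capacities, and the random monitor below are invented for illustration; this is not the Ultralight control software.

        # Toy sketch of the "network as a managed resource" idea: poll
        # link monitors and steer a transfer onto the least-loaded
        # end-to-end path.  Names, capacities, and loads are invented.

        import random

        LINKS = {  # path name -> capacity in Gbps (illustrative numbers)
            "CERN-Starlight-Caltech": 10.0,
            "CERN-Manlan-Michigan": 10.0,
            "CERN-Starlight-Florida": 10.0,
        }

        def poll_utilisation(path):
            """Stand-in for a real monitor (e.g. per-link traffic counters)."""
            return random.uniform(0.0, 1.0)  # fraction of capacity in use

        def pick_path():
            """Choose the path with the most free capacity right now."""
            free = {p: cap * (1 - poll_utilisation(p))
                    for p, cap in LINKS.items()}
            best = max(free, key=free.get)
            return best, free[best]

        if __name__ == "__main__":
            path, gbps = pick_path()
            print(f"schedule transfer on {path} (~{gbps:.1f} Gbps free)")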

    The Design and Demonstration of the Ultralight Testbed

    In this paper we present the motivation, the design, and a recent demonstration of the UltraLight testbed at SC|05. The goal of the UltraLight testbed is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed from end-to-end. To achieve its goal we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we first present early results in the various working areas of the project. We then describe our experience with the network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this Challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many Grid computing sites.
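    One standard ingredient of the kernel setup and application tuning mentioned above is sizing socket buffers to the path's bandwidth-delay product (BDP), so a single stream can keep a long fat pipe full. The sketch below shows the arithmetic and the socket calls; the 10 Gbps / 120 ms figures are illustrative, not the SC|05 configuration, and the kernel may clamp the requested sizes.

        # Sketch of one standard tuning step behind high-throughput
        # transfers: size socket buffers to the bandwidth-delay product
        # (BDP).  The 10 Gbps / 120 ms figures are illustrative only.

        import socket

        def bdp_bytes(gbps, rtt_ms):
            """Bandwidth-delay product: bits in flight on the path, in bytes."""
            return int(gbps * 1e9 * (rtt_ms / 1000.0) / 8)

        if __name__ == "__main__":
            # ~150 MB for an illustrative 10 Gbps transatlantic path
            want = bdp_bytes(gbps=10.0, rtt_ms=120.0)
            print(f"target buffer: {want} bytes")

            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            # The kernel caps these requests (e.g. net.core.rmem_max on
            # Linux), so the granted size may be smaller than requested.
            s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, want)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, want)
            print("granted rcvbuf:",
                  s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
            s.close()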

    Methods and design issues for next generation network-aware applications

    Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods of designing distributed applications for high-speed networks to improve application scalability, performance and capabilities. As the amount of data generated by scientific applications continues to grow, applications must be designed to use parallel, distributed resources and high-speed networks to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture in which applications can use specialized low-level interfaces. The main focus of this research is on interactive, collaborative visualization of large datasets. This work describes how a visualization application can be improved by using distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution, collaborative visualization and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O). Motivated by the distributed visualization application, this work also researches remote data access systems. Because wide-area networks may have a high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are analyzed, and the results show that an architecture combining bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, which also supports high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system. Transport protocols are compared to understand which protocol can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An HD-based remote teaching application experiment is conducted, illustrating the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution and deployment scenarios.
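    The latency-hiding argument behind the bulk-plus-pipeline architecture can be illustrated with a toy model (the 50 ms round-trip time and block count are invented, and the threading scheme is a stand-in for the real system): keeping a window of requests in flight overlaps many round trips that a blocking reader would pay for serially.

        # Toy model of why pipelining hides wide-area latency: compare
        # blocking one-request-at-a-time reads with a reader that keeps
        # a window of requests in flight.  RTT and block count invented.

        import threading, time

        RTT = 0.05        # seconds per request round trip (illustrative)
        BLOCKS = 20

        def fetch(block_id):
            time.sleep(RTT)           # stand-in for a WAN round trip
            return f"block-{block_id}"

        def sequential():
            return [fetch(i) for i in range(BLOCKS)]

        def pipelined(window=8):
            results = [None] * BLOCKS
            def worker(i):
                results[i] = fetch(i)
            threads = []
            for i in range(BLOCKS):
                t = threading.Thread(target=worker, args=(i,))
                t.start()
                threads.append(t)
                # keep at most `window` requests outstanding
                while sum(th.is_alive() for th in threads) >= window:
                    time.sleep(0.001)
            for th in threads:
                th.join()
            return results

        if __name__ == "__main__":
            t0 = time.time(); sequential(); t_seq = time.time() - t0
            t0 = time.time(); pipelined();  t_pipe = time.time() - t0
            print(f"sequential {t_seq:.2f}s vs pipelined {t_pipe:.2f}s")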

    EC-IoT : an easy configuration framework for constrained IoT devices

    Connected devices offer tremendous opportunities. However, their configuration and control remain a major challenge to widespread adoption by less technically skilled people. Over the past few years, a lot of attention has been given to improving the configuration process of constrained devices with limited resources, such as little available memory and the absence of a user interface. Still, a major deficiency is the lack of a streamlined, standardized configuration process. In this paper we propose EC-IoT, a novel configuration framework for constrained IoT devices. The proposed framework makes use of open standards, leveraging the Constrained Application Protocol (CoAP), an application protocol that enables HTTP-like RESTful interactions with constrained devices. To validate the proposed approach, we present a prototype implementation of the EC-IoT framework and assess its scalability. The research from the DEWI project (www.dewi-project.eu) leading to these results has received funding from the ARTEMIS Joint Undertaking under grant agreement no. 621353 and from the agency for Flanders Innovation & Entrepreneurship (VLAIO). The research from the ITEA2 FUSE-IT project (13023) leading to these results has received funding from the agency for Flanders Innovation & Entrepreneurship (VLAIO).
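    A feel for the CoAP-based, HTTP-like interaction style can be given with a minimal client sketch using the third-party aiocoap Python library (pip install aiocoap). The device address and the /config resource path are hypothetical; EC-IoT's actual resource layout is not shown here.

        # Minimal CoAP client sketch using the third-party aiocoap
        # library.  The device address and the /config resource are
        # hypothetical stand-ins, not EC-IoT's actual interface.

        import asyncio
        from aiocoap import Context, Message, GET

        async def read_config(uri):
            ctx = await Context.create_client_context()
            request = Message(code=GET, uri=uri)
            # CoAP mirrors HTTP semantics: a GET on a resource returns a
            # response code and payload, here the device's configuration.
            response = await ctx.request(request).response
            return response.code, response.payload

        if __name__ == "__main__":
            code, payload = asyncio.run(read_config("coap://[fe80::1]/config"))
            print(code, payload.decode(errors="replace"))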

    Network security mechanisms and implementations for the next generation reliable fast data transfer protocol - UDT

    University of Technology, Sydney, Faculty of Engineering and Information Technology. TCP protocol variants (such as FAST, BiC, XCP, Scalable and High Speed) have demonstrated improved performance in simulation and in several limited network experiments. However, practical use of these protocols is still very limited because of implementation and installation difficulties. Users who need to transfer bulk data (e.g., in Cloud/Grid computing) usually turn to application-level solutions, where these variants do not fare well. Among the protocols considered at the application level are User Datagram Protocol (UDP)-based protocols, such as UDT (UDP-based Data Transfer Protocol). UDT is one of the most recently developed transport protocols with congestion control algorithms. It was developed to support next-generation high-speed networks, including wide-area optical networks. It is considered a state-of-the-art protocol, addressing infrastructure requirements for transmitting data in high-speed networks. Its development, however, creates new vulnerabilities because, like many other protocols, it relies solely on the existing security mechanisms of current protocols such as the Transmission Control Protocol (TCP) and UDP. Certainly, both UDT and the decades-old TCP/UDP lack a well-thought-out security architecture that addresses problems in today's networks. In this dissertation, we focus on investigating UDT security issues and offer important contributions to the field of network security. The choice of UDT is significant for several reasons: as a newly designed next-generation protocol, UDT is considered one of the most promising and fastest protocols ever created that operate on top of UDP. It is a reliable UDP-based application-level data transport protocol intended for data-intensive applications over wide-area high-speed networks. It can transfer data in a highly configurable framework and can accommodate various congestion control algorithms. Its proven success at transferring terabytes of data gathered from outer space across long distances is a testament to its significant commercial promise. In this work, our objective is to examine a range of security methods used with existing mature protocols such as TCP and UDP and evaluate their viability for UDT. We highlight the security limitations of UDT and determine the threshold of feasible security schemes within the constraints under which UDT was designed and developed. Subsequently, we provide ways of securing applications and traffic using the UDT protocol, and offer recommendations for securing UDT. We create security mechanisms tailored for UDT and propose a new security architecture that can assist network designers, security investigators, and users who want to incorporate security when implementing UDT across wide area networks. We then conduct practical experiments on UDT using our security mechanisms and explore the use of other existing security mechanisms used on TCP/UDP for UDT. To analyse the security mechanisms, we carry out a formal proof of correctness to assist us in determining their applicability, using Protocol Composition Logic (PCL). This approach is modular, comprising a separate proof of each protocol section and providing insight into the network environment in which each section can be reliably employed. Moreover, the proof holds for a variety of failure recovery strategies and other implementation and configuration options.
    We derive our technique from the PCL analyses of TLS and Kerberos in the literature. We maintain, however, the novelty of our work for UDT, particularly our newly developed mechanisms UDT-AO, UDT-DTLS and UDT-Kerberos (GSS-API), which together form our proposed UDT security architecture. We further analyse this architecture using rewrite systems and automata. We outline and use a symbolic analysis approach to effectively verify our proposed architecture. This approach allows dataflow replication in the implementation of selected mechanisms that are integrated into the proposed architecture. We consider this approach effective because the properties of the rewrite systems let us represent specific flows within the architecture, giving a theoretical and reliable method of analysis. We introduce abstract representations of the components that compose the architecture and conduct our investigation through structural, semantic and query analyses. The result of this work, the first in the literature, is a more robust theoretical and practical representation of a UDT security architecture, viable for use with other high-speed network protocols.
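    The flavour of one of the proposed mechanisms can be sketched: UDT-AO is modelled on TCP's Authentication Option, in which every segment carries a keyed MAC. The toy below (the payload-plus-32-byte-tag framing and the key handling are invented for illustration, not the dissertation's wire format) appends an HMAC to each datagram and rejects packets that fail verification.

        # Toy illustration of the idea behind an authentication option
        # such as UDT-AO: each datagram carries an HMAC computed with a
        # pre-shared key, and receivers drop packets that fail to verify.
        # Framing and key handling are invented, not the real wire format.

        import hmac, hashlib

        KEY = b"pre-shared-session-key"   # in practice derived per connection
        TAG_LEN = 32                      # SHA-256 digest size

        def seal(payload: bytes) -> bytes:
            tag = hmac.new(KEY, payload, hashlib.sha256).digest()
            return payload + tag

        def open_checked(packet: bytes) -> bytes:
            payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
            expected = hmac.new(KEY, payload, hashlib.sha256).digest()
            # constant-time comparison avoids timing side channels
            if not hmac.compare_digest(tag, expected):
                raise ValueError("authentication failed; drop packet")
            return payload

        if __name__ == "__main__":
            pkt = seal(b"UDT data segment")
            print(open_checked(pkt))                   # b'UDT data segment'
            tampered = pkt[:-1] + bytes([pkt[-1] ^ 1])
            try:
                open_checked(tampered)
            except ValueError as e:
                print(e)                               # authentication failed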