226 research outputs found

    GridFTP: Protocol Extensions to FTP for the Grid

    Get PDF
    GridFTP: Protocol Extensions to FTP for the Grid

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Full text link
    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology. This would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of this area through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping also help to provide an easy way for new practitioners to understand this complex area of research. (Comment: 46 pages, 16 figures, Technical Report.)
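    To make the idea of mapping systems onto such a taxonomy concrete, here is a minimal Python sketch; the taxonomy dimensions, category values, and the example system are hypothetical placeholders, not the paper's actual taxonomy.

```python
from enum import Enum

# Hypothetical taxonomy dimensions (placeholders, not the paper's categories).
class Transport(Enum):
    GRIDFTP = "GridFTP"
    HTTP = "HTTP"

class Replication(Enum):
    CENTRALIZED_CATALOG = "centralized catalog"
    DECENTRALIZED_CATALOG = "decentralized catalog"

class Scheduling(Enum):
    DATA_AWARE = "data-aware"
    COMPUTE_ONLY = "compute-only"

# Mapping a fictional Data Grid system onto the taxonomy, in the spirit of
# the paper's taxonomy-to-system validation step.
systems = {
    "ExampleDataGrid": {
        "transport": Transport.GRIDFTP,
        "replication": Replication.CENTRALIZED_CATALOG,
        "scheduling": Scheduling.DATA_AWARE,
    },
}

def uncovered(dimension: type) -> set:
    """Simple 'gap analysis': taxonomy values not used by any surveyed system."""
    used = {props[dimension.__name__.lower()] for props in systems.values()}
    return set(dimension) - used

print(uncovered(Scheduling))  # -> {<Scheduling.COMPUTE_ONLY: 'compute-only'>}
```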

    Failure-awareness and dynamic adaptation in data scheduling

    Get PDF
    Over the years, scientific applications have become more complex and more data intensive. Especially large-scale simulations and scientific experiments in areas such as physics, biology, astronomy and earth sciences demand highly distributed resources to satisfy excessive computational requirements. Increasing data requirements and the distributed nature of the resources have made I/O the major bottleneck for end-to-end application performance. Existing systems fail to address issues such as reliability, scalability, and efficiency in dealing with wide-area data access, retrieval and processing. In this study, we explore data-intensive distributed computing and study the challenges of data placement in distributed environments. After analyzing different application scenarios, we develop new data scheduling methodologies and identify the key attributes for reliability, adaptability and performance optimization of distributed data placement tasks. Inspired by techniques used in microprocessor and operating system architectures, we extend and adapt some of the known low-level data handling and optimization techniques to distributed computing. Two major contributions of this work are (i) a failure-aware data placement paradigm for increased fault tolerance, and (ii) adaptive scheduling of data placement tasks for improved end-to-end performance. The failure-aware data placement includes early error detection, error classification, and the use of this information in scheduling decisions for the prevention of and recovery from possible future errors. The adaptive scheduling approach includes dynamically tuning data transfer parameters over wide-area networks for efficient utilization of available network capacity and optimized end-to-end data transfer performance.
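    A minimal Python sketch of the two contributions follows; the error classes, retry policy, and tuning heuristic are illustrative assumptions rather than the scheduler developed in the thesis.

```python
import time
from dataclasses import dataclass

EXPECTED_MBPS = 100  # assumed nominal capacity of the wide-area link

# Hypothetical error classification: transient errors are retried, fatal ones are not.
TRANSIENT = ("connection reset", "timeout", "temporarily unavailable")
FATAL = ("no such file", "permission denied")

@dataclass
class Transfer:
    src: str
    dst: str
    parallel_streams: int = 4  # tunable transfer parameter

def classify(error_msg: str) -> str:
    """Early error classification used to drive scheduling decisions."""
    msg = error_msg.lower()
    if any(e in msg for e in FATAL):
        return "fatal"
    if any(e in msg for e in TRANSIENT):
        return "transient"
    return "unknown"

def schedule(transfer: Transfer, run, max_retries: int = 3) -> bool:
    """Failure-aware placement: retry only recoverable failures, and adapt
    transfer parameters based on the observed throughput."""
    for attempt in range(max_retries):
        ok, error_msg, throughput = run(transfer)
        if ok:
            if throughput < 0.5 * EXPECTED_MBPS:
                # Adaptive tuning: the link looks underutilized, so try more
                # parallel streams next time (illustrative heuristic).
                transfer.parallel_streams = min(transfer.parallel_streams * 2, 32)
            return True
        if classify(error_msg) == "fatal":
            return False  # do not waste retries on unrecoverable errors
        time.sleep(2 ** attempt)  # back off before retrying a transient error
    return False

# Usage: schedule(Transfer("gsiftp://src/file", "file:///dst/file"), run=my_transfer_fn)
```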

    mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    Get PDF
    BACKGROUND: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. RESULTS: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. CONCLUSION: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows it to be easily extended over the Internet.
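    mGrid itself is a native Matlab toolbox backed by PHP scripts and the Apache web server; the Python sketch below only illustrates the underlying idea of packing user code together with its run-time variables and shipping both to a load-balanced remote worker. The endpoint URLs and wire format are assumptions, not mGrid's actual interface.

```python
import pickle
import random
import urllib.request

# Hypothetical pool of workers fronted by a web server (mGrid uses Apache + PHP;
# this analogue simply POSTs a payload to an assumed endpoint).
WORKERS = ["http://worker1.example.org/run", "http://worker2.example.org/run"]

def remote_call(func_source: str, variables: dict):
    """Pack user-defined code and its run-time variables, send them to a
    remote worker chosen by a trivial load balancer, and return the result."""
    payload = pickle.dumps({"code": func_source, "vars": variables})
    url = random.choice(WORKERS)  # stand-in for mGrid's load balancing
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/octet-stream"}
    )
    with urllib.request.urlopen(req) as resp:
        return pickle.loads(resp.read())

# Usage (requires a matching worker to be running at one of the URLs above):
# result = remote_call("def f(x): return x * 2", {"x": 21})
```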

    Policies for Web Services

    Get PDF
    Web services are predominantly used to implement service-oriented architectures (SOA). However, there are several areas such as temporal dimensions, real-time, streaming, or efficient and flexible file transfers where web service functionality should be extended. These extensions can, for example, be achieved by using policies. Since there are often alternative solutions to provide functionality (e.g., different protocols can be used to achieve the transfer of data), the WS-Policy standard is especially useful for extending web services with policies. It allows the creation of policies that generally state the properties under which a service is provided and that explicitly express alternative properties. To extend the functionality of web services, two policies are introduced in this thesis: the Temporal Policy and the Communication Policy. The temporal policy is the foundation for adding temporal dimensions to a WS-Policy. The temporal policy itself is not a WS-Policy but an independent policy language that describes temporal dimensions of and dependencies between temporal policies and WS-Policies. Switching of protocol dependencies, pricing of services, quality of service, and security are example areas for using a temporal policy. To describe the protocol dependencies of a service for streaming, real-time and file transfers, a communication policy can be utilized. The communication policy is a concrete WS-Policy. With the communication policy, a service can expose the protocols it depends on for communication after its invocation. Thus, a web service client knows the protocols required to support a communication with the service, and it is therefore possible to evaluate beforehand whether an invocation of the service is reasonable. On top of the newly introduced policies, novel mechanisms and tools are provided to simplify service use and enable flexible and efficient data handling. Furthermore, the involvement of the end user in the development process can be achieved more easily. The Flex-SwA architecture, the first component in this thesis based on the newly introduced policies, implements the actual file transfers and streaming protocols that are described as dependencies in a communication policy. Several communication patterns support the flexible handling of the communication. A reference concept enables seamless message forwarding with reduced data movement. Based on the Flex-SwA implementation and the communication policy, it is possible to improve usability - especially in the area of service-oriented Grids - by integrating data transfers into an automatically generated web and Grid service client. The Web and Grid Service Browser is introduced in this thesis as such a generic client. It provides a familiar environment for using services by offering client generation as part of the browser. Data transfers are directly integrated into service invocation without having to perform data transmissions explicitly. For multimedia MIME types, special plugins allow the consumption of multimedia data. To enable an end user to build applications that also leverage high-performance computing resources, the Service-enabled Mashup Editor is presented, which lets the user combine popular web applications with web and Grid services. Again, the communication policy provides descriptive means for file transfers, and Flex-SwA's reference concept is used for data exchange. To show the applicability of these novel concepts, several use cases from the area of multimedia processing have been selected.
Based on the temporal policy, the communication policy, Flex-SwA, the Web and Grid Service Browser, and the Service-enabled Mashup Editor, the development of a scalable service-oriented multimedia architecture is presented. The multimedia SOA offers, among others, a face detection workflow, a video-on-demand service, and an audio resynthesis service. More precisely, a video-on-demand service describes its dependency on a multicast protocol by using a communication policy. A temporal policy is then used to describe a protocol switch from one multicast protocol to another by changing the communication policy at the end of its validity period. The Service-enabled Mashup Editor is used as a client for the new multicast protocol after the switch. Flex-SwA is used to stream single frames from a frame decoder service to a face detection service (both part of the face detection workflow) and to transfer audio files to an audio resynthesis service using the different Flex-SwA communication patterns. The invocation of the face detection workflow and the audio resynthesis service is realized with the Web and Grid Service Browser.
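    The negotiation that a communication policy enables can be pictured with the following Python sketch; the protocol names and the policy representation are assumptions standing in for the WS-Policy documents defined in the thesis.

```python
# A communication policy advertises, per data exchange, the protocols the
# service depends on after invocation; each entry lists acceptable alternatives.
service_policy = {
    "video_stream": ["udp-multicast", "rtsp"],
    "result_file": ["gridftp", "https"],
}

client_capabilities = {"rtsp", "https"}

def invocation_reasonable(policy: dict, capabilities: set) -> bool:
    """Check before invocation whether the client can satisfy at least one
    protocol alternative for every required communication."""
    return all(any(p in capabilities for p in alternatives)
               for alternatives in policy.values())

def choose_protocols(policy: dict, capabilities: set) -> dict:
    """Pick the first mutually supported protocol for each exchange."""
    return {name: next(p for p in alternatives if p in capabilities)
            for name, alternatives in policy.items()}

if invocation_reasonable(service_policy, client_capabilities):
    print(choose_protocols(service_policy, client_capabilities))
    # -> {'video_stream': 'rtsp', 'result_file': 'https'}
```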

    Design and implementation of the fast send protocol

    Get PDF
    Over the last decades, Internet traffic has grown dramatically. Besides the number of transfers, data sizes have risen as well. Traditional transfer protocols do not adapt to this evolution. Large-scale computational applications running on expensive parallel computers produce large amounts of data which often have to be transferred to weaker machines at the clients' premises. As parallel computers are frequently charged by the minute, it is indispensable to minimize the transfer time after the computation has succeeded in order to keep down costs. Consequently, the economic focus lies on minimizing the time to move all data away from the parallel computer, whereas the actual time to arrival remains less (but still) important. This paper describes the design and implementation of a new transfer protocol, the Fast Send Protocol (FSP), which employs striping to intermediate nodes in order to minimize sending time and to utilize the sender's resources to a high extent.
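    The striping idea can be sketched in Python as follows; the intermediate node names, the chunk size, and the push function are assumptions, not FSP's actual wire protocol.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical intermediate nodes that buffer stripes until the final receiver
# pulls them; the expensive sender only pays for the initial parallel push.
INTERMEDIATE_NODES = ["buffer1.example.org", "buffer2.example.org",
                      "buffer3.example.org"]

def push_stripe(node: str, stripe_id: int, data: bytes) -> None:
    """Placeholder for the actual network send to an intermediate node."""
    print(f"sent stripe {stripe_id} ({len(data)} bytes) to {node}")

def fast_send(data: bytes, chunk_size: int = 1 << 20) -> None:
    """Stripe the data round-robin over the intermediate nodes and push the
    stripes in parallel, so the sender is freed as early as possible."""
    stripes = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=len(INTERMEDIATE_NODES)) as pool:
        for i, stripe in enumerate(stripes):
            node = INTERMEDIATE_NODES[i % len(INTERMEDIATE_NODES)]
            pool.submit(push_stripe, node, i, stripe)

fast_send(b"x" * (3 << 20))  # three 1 MiB stripes spread over three nodes
```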

    ARC Computing Element System Administrator Guide

    Get PDF
    The ARC Computing Element (CE) is an EMI product allowing submission and management of applications running on DCI computational resources.

    STCP: A New Transport Protocol for High-Speed Networks

    Get PDF
    Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. A number of works in the literature have modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently take advantage of the high bandwidth available from high-speed and optical infrastructures, we propose Stratified TCP (STCP), which employs parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified to probe the available link bandwidth more aggressively and efficiently, which in turn increases performance. Simulation results show that STCP offers a considerable improvement in performance compared with conventional TCP and other variants such as Layered TCP (LTCP).
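    The contrast between plain AIMD and a stratified variant with parallel virtual layers can be sketched as follows; the constants and the per-loss back-off policy are illustrative assumptions, not STCP's exact rules.

```python
def aimd_update(cwnd: float, loss: bool, alpha: float = 1.0, beta: float = 0.5) -> float:
    """Classic AIMD: grow by alpha per round trip, multiply by beta on loss."""
    return cwnd * beta if loss else cwnd + alpha

def stratified_update(layer_windows: list, loss: bool) -> list:
    """Each virtual layer runs its own AIMD instance and the connection's
    effective window is their sum, so growth is k times more aggressive.
    On loss, only one layer backs off here (an assumed policy for illustration)."""
    if loss:
        i = layer_windows.index(max(layer_windows))
        layer_windows[i] *= 0.5
        return layer_windows
    return [w + 1.0 for w in layer_windows]

layers = [10.0] * 4  # four virtual transmission layers
for rtt in range(5):
    layers = stratified_update(layers, loss=(rtt == 3))
print(sum(layers))  # effective congestion window after 5 RTTs with one loss
```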

    Long-Term Neuroadaptations Produced by Withdrawal from Repeated Cocaine Treatment: Role of Dopaminergic Receptors in Modulating Cortical Excitability

    Get PDF
    Dopamine (DA) modulates neuronal activity in the prefrontal cortex (PFC) and is necessary for optimal cognitive function. Dopamine transmission in the PFC is also important for the behavioral adaptations produced by repeated exposure to cocaine. Therefore, we investigated the effects of repeated cocaine treatment followed by withdrawal (2–4 weeks) on the responsivity of cortical cells to electrical stimulation of the ventral tegmental area (VTA) and to systemic administration of DA D1 or D2 receptor antagonists. Cortical cells in cocaine- and saline-treated animals exhibited a similar decrease in excitability after the administration of D1 receptor antagonists. In contrast, cortical neurons from cocaine-treated rats exhibited a lack of D2-mediated regulation relative to saline rats. Furthermore, in contrast to saline-treated animals, VTA stimulation did not increase cortical excitability in the cocaine group. These data suggest that withdrawal from repeated cocaine administration elicits some long-term neuroadaptations in the PFC, including (1) reduced D2-mediated regulation of cortical excitability, (2) reduced responsivity of cortical cells to phasic increases in DA, and (3) a trend toward an overall decrease in excitability of PFC neurons.

    The Open Grid Computing Environments collaboration: portlets and services for science gateways

    Full text link
    We review the efforts of the Open Grid Computing Environments collaboration. By adopting a general three-tiered architecture based on common standards for portlets and Grid Web services, we can deliver numerous capabilities to science gateways from our diverse constituent efforts. In this paper, we discuss our support for standards-based Grid portlets using the Velocity development environment. Our Grid portlets are based on abstraction layers provided by the Java CoG kit, which hide the differences between Grid toolkits. Sophisticated services are decoupled from the portal container using Web service strategies. We describe advance information, semantic data, collaboration, and science application services developed by our consortium. Copyright © 2006 John Wiley & Sons, Ltd. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/56029/1/1078_ftp.pd
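    A rough Python analogue of the abstraction-layer idea follows; the real stack uses Java portlets and the Java CoG kit, so the class and method names here are assumptions for illustration only.

```python
from abc import ABC, abstractmethod

class JobSubmitter(ABC):
    """Abstraction layer in the spirit of the Java CoG kit: portlets program
    against this interface and never see which Grid toolkit sits underneath."""
    @abstractmethod
    def submit(self, executable: str, arguments: list) -> str: ...

class PreWsGramSubmitter(JobSubmitter):
    def submit(self, executable, arguments):
        return f"submitted {executable} via pre-WS GRAM (placeholder)"

class WsGramSubmitter(JobSubmitter):
    def submit(self, executable, arguments):
        return f"submitted {executable} via WS GRAM (placeholder)"

def job_portlet(submitter: JobSubmitter) -> str:
    """A tiny 'portlet' that renders a markup fragment; swapping the submitter
    changes the middleware without touching the portlet code."""
    status = submitter.submit("/bin/hostname", [])
    return f"<div class='portlet'>{status}</div>"

print(job_portlet(PreWsGramSubmitter()))
```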