
    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries, which makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems, most of them the outcome of academic research projects, have been developed all over the world. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, since this functionality was not originally supported. A comparison of these systems on the basis of their architecture, implementation model and several other features is included. Comment: 19 pages, 10 figures
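
    A resource broker of the kind described here typically matches a job's requirements against the resources advertised by participating sites and picks the least-loaded match. The Python sketch below is purely illustrative and is not drawn from the chapter; the Site and Job classes, their fields and the selection rule are assumptions made for the example.

        from dataclasses import dataclass, field

        @dataclass
        class Site:
            name: str
            free_cpus: int
            software: set = field(default_factory=set)
            queue_length: int = 0

        @dataclass
        class Job:
            cpus: int
            requires: set

        def select_site(job, sites):
            """Pick the least-loaded site whose CPUs and software satisfy the job."""
            eligible = [s for s in sites
                        if s.free_cpus >= job.cpus and job.requires <= s.software]
            if not eligible:
                raise RuntimeError("no suitable site found")
            return min(eligible, key=lambda s: s.queue_length)

        sites = [Site("siteA", 64, {"gaussian"}, queue_length=3),
                 Site("siteB", 16, {"gaussian", "blast"}, queue_length=1)]
        print(select_site(Job(cpus=8, requires={"gaussian"}), sites).name)  # siteB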

    An Integrated Approach for Characterizing Aerosol Climate Impacts and Environmental Interactions

    Aerosols exert myriad influences on the Earth's environment and climate, and on human health. The complexity of aerosol-related processes requires that information gathered to improve our understanding of climate change must originate from multiple sources, and that effective strategies for data integration need to be established. While a vast array of observed and modeled data are becoming available, the aerosol research community currently lacks the necessary tools and infrastructure to reap maximum scientific benefit from these data. Spatial and temporal sampling differences among a diverse set of sensors, nonuniform data qualities, aerosol mesoscale variabilities, and difficulties in separating cloud effects are some of the challenges that need to be addressed. Maximizing the long-term benefit from these data also requires maintaining consistently well-understood accuracies as measurement approaches evolve and improve. A comprehensive understanding of how aerosol physical, chemical, and radiative processes impact the Earth system can be achieved only through a multidisciplinary, inter-agency, and international initiative capable of dealing with these issues. A systematic approach, capitalizing on modern measurement and modeling techniques, geospatial statistics methodologies, and high-performance information technologies, can provide the necessary machinery to support this objective. We outline a framework for integrating and interpreting observations and models, and establishing an accurate, consistent, and cohesive long-term record, following a strategy whereby information and tools of progressively greater sophistication are incorporated as problems of increasing complexity are tackled. This concept is named the Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON). To encompass the breadth of the effort required, we present a set of recommendations dealing with data interoperability; measurement and model integration; multisensor synergy; data summarization and mining; model evaluation; calibration and validation; augmentation of surface and in situ measurements; advances in passive and active remote sensing; and design of satellite missions. Without an initiative of this nature, the scientific and policy communities will continue to struggle with understanding the quantitative impact of complex aerosol processes on regional and global climate change and air quality.
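
    One of the integration challenges listed above is the different spatial and temporal sampling of individual sensors. As an illustration only, the Python sketch below averages two hypothetical aerosol optical depth records onto a shared daily time grid before merging them; the sensor names, timestamps and values are invented and are not part of the PARAGON design.

        import pandas as pd

        # Hypothetical aerosol optical depth (AOD) records from two sensors
        # with different sampling times; all values are invented for illustration.
        sensor_a = pd.Series([0.12, 0.15, 0.11],
                             index=pd.to_datetime(["2005-07-01 10:30",
                                                   "2005-07-01 13:45",
                                                   "2005-07-02 10:30"]))
        sensor_b = pd.Series([0.14, 0.10],
                             index=pd.to_datetime(["2005-07-01 22:00",
                                                   "2005-07-02 09:15"]))

        # Average each record onto a common daily grid, then combine them.
        daily = pd.concat([sensor_a.resample("1D").mean(),
                           sensor_b.resample("1D").mean()],
                          axis=1, keys=["sensor_a", "sensor_b"])
        daily["combined"] = daily.mean(axis=1)  # simple unweighted merge
        print(daily)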

    The Motivation, Architecture and Demonstration of the Ultralight Network Testbed

    In this paper we describe progress in the NSF-funded Ultralight project and a recent demonstration of Ultralight technologies at SuperComputing 2005 (SC|05). The goal of the Ultralight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Ultralight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the Ultralight project. We then cover early results in the various working areas of the project. The remainder of the paper describes our experience with the Ultralight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the Ultralight backbone network. The exercise highlighted the benefits of Ultralight's research and development efforts, which are enabling new and advanced methods of distributed scientific data analysis.
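
    The kernel setup mentioned above typically amounts to raising the Linux TCP socket buffer limits so that a single stream can fill a long, fast path. The Python sketch below reads the relevant /proc/sys entries and compares them against example targets; the target values are plausible figures for a high bandwidth-delay-product path, not the actual SC|05 configuration.

        # Illustrative check of Linux TCP buffer tuning for fast wide-area paths.
        # Target values are example figures only, not the SC|05 settings.
        targets = {
            "net/core/rmem_max": "134217728",            # 128 MiB receive ceiling
            "net/core/wmem_max": "134217728",            # 128 MiB send ceiling
            "net/ipv4/tcp_rmem": "4096 87380 134217728",
            "net/ipv4/tcp_wmem": "4096 65536 134217728",
        }

        for key, wanted in targets.items():
            try:
                with open(f"/proc/sys/{key}") as f:
                    current = f.read().split()
            except FileNotFoundError:
                print(f"{key}: not available on this host")
                continue
            status = "ok" if current == wanted.split() else "differs"
            print(f"{key}: current={' '.join(current)}  target={wanted}  ({status})")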

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We hope that the proposed taxonomy and mapping also provide an easy way for new practitioners to understand this complex area of research. Comment: 46 pages, 16 figures, Technical Report
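
    The mapping of systems onto the taxonomy can be pictured as a small classification table checked against the allowed values of each dimension. The Python sketch below is an invented illustration of that idea; the dimension names are simplified and the system entries are hypothetical, not the paper's actual mapping.

        # Toy mapping of Data Grid systems onto simplified taxonomy dimensions.
        taxonomy = {
            "organization": {"hierarchical", "federated", "hybrid"},
            "replication":  {"static", "dynamic"},
            "scheduling":   {"data-aware", "compute-only"},
        }

        systems = {
            "ExampleGridA": {"organization": "hierarchical",
                             "replication": "static",
                             "scheduling": "data-aware"},
            "ExampleGridB": {"organization": "federated",
                             "replication": "dynamic",
                             "scheduling": "compute-only"},
        }

        # Validate each classification against the taxonomy: the kind of
        # consistency check that such a mapping makes possible.
        for name, classification in systems.items():
            for dimension, value in classification.items():
                assert value in taxonomy[dimension], f"{name}: bad {dimension}"
            print(name, "->", classification)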

    Architecture and Design of Medical Processor Units for Medical Networks

    This paper introduces analogical and deductive methodologies for the design of medical processor units (MPUs). From the study of the evolution of numerous earlier processors, we derive the basis for the architecture of MPUs. These specialized processors perform unique medical functions encoded as medical operational codes (mopcs). From a pragmatic perspective, MPUs function very much like CPUs. Both processors have unique operation codes that command the hardware to perform a distinct chain of subprocesses upon operands and generate a specific result unique to the opcode and the operand(s). In medical environments, the MPU decodes the mopcs, executes a series of medical sub-processes and sends out secondary commands to the medical machine. Whereas operands in a typical computer system are numerical and logical entities, the operands in a medical machine are objects such as patients, blood samples, tissues, operating rooms, medical staff, medical bills, patient payments, etc. We follow the functional overlap between the two processors and evolve the design of medical computer systems and networks. Comment: 17 pages
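
    The analogy between CPU opcodes and medical operational codes suggests a simple decode-and-dispatch loop. The Python sketch below is a hypothetical illustration of that idea only; the mopc name, the operand and the sub-processes are invented and do not come from the paper.

        # Hypothetical MPU dispatch loop: each medical operational code (mopc)
        # decodes into a chain of sub-processes acting on non-numeric operands
        # such as patients or blood samples.
        def draw_sample(patient):
            return f"blood sample from {patient}"

        def run_panel(sample):
            return f"lab panel results for {sample}"

        def file_result(result):
            print("filed:", result)
            return result

        MOPC_TABLE = {
            # mopc name -> ordered chain of sub-processes
            "BLOOD_WORKUP": [draw_sample, run_panel, file_result],
        }

        def execute(mopc, operand):
            """Decode a mopc and run its sub-process chain on the operand."""
            for step in MOPC_TABLE[mopc]:
                operand = step(operand)
            return operand

        execute("BLOOD_WORKUP", "patient #1042")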