
    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    A Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries, which makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed around the world, most of them the results of academic research projects. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, since this functionality was not supported by UNICORE itself. A comparison of these systems on the basis of their architecture, implementation model and several other features is included.
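    As a rough illustration of what a resource broker does (a generic, hypothetical sketch, not the UNICORE broker described in the chapter; all names and fields are invented), it matches a job's requirements against the advertised properties of candidate resources and selects a suitable one:

        # Hypothetical sketch of resource brokering: match a job's requirements
        # against resource properties and choose the cheapest suitable resource.
        # All names and fields are illustrative, not part of any middleware API.

        resources = [
            {"name": "cluster-a", "cpus": 64,  "os": "linux", "cost": 5},
            {"name": "cluster-b", "cpus": 256, "os": "linux", "cost": 9},
            {"name": "smp-c",     "cpus": 16,  "os": "aix",   "cost": 3},
        ]

        job = {"cpus_needed": 32, "os": "linux"}

        def broker(job, resources):
            # Keep only resources that satisfy the job's requirements...
            suitable = [r for r in resources
                        if r["cpus"] >= job["cpus_needed"] and r["os"] == job["os"]]
            if not suitable:
                raise RuntimeError("no matching resource found")
            # ...and pick the cheapest of them.
            return min(suitable, key=lambda r: r["cost"])

        print(broker(job, resources)["name"])  # -> cluster-a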

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges, given the variety of application areas and domains that this technology promises to serve. Fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that fulfills its requirements. However, every big data application has its own data characteristics, and thus its data fits a different data model. This paper presents a feature and use case analysis and comparison of the four main data models, namely document oriented, key value, graph and wide column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider when making a choice. Typically, big data storage needs to communicate with the execution engine and with other processing and visualization technologies to create a comprehensive solution, which brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of the available big data file formats for Hadoop, which is the foundation of most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
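    To make the four data models concrete, here is a minimal, hypothetical sketch (all names and structures are illustrative and not taken from the paper) of how one and the same user record could be represented under each model:

        # Illustrative only: the same "user" record expressed in the style of the
        # four NoSQL data models discussed above.

        # 1. Document oriented: one self-contained, JSON-like document
        document = {
            "_id": "user:42",
            "name": "Alice",
            "city": "London",
            "orders": [{"id": 1, "total": 30.0}, {"id": 2, "total": 12.5}],
        }

        # 2. Key value: an opaque value looked up by a single key
        key_value = {"user:42": '{"name": "Alice", "city": "London"}'}

        # 3. Graph: entities as nodes, relationships as labelled edges
        nodes = {"user:42": {"name": "Alice"}, "order:1": {"total": 30.0}}
        edges = [("user:42", "PLACED", "order:1")]

        # 4. Wide column: row key -> column families -> columns
        wide_column = {
            "user:42": {
                "profile": {"name": "Alice", "city": "London"},
                "orders":  {"order:1": 30.0, "order:2": 12.5},
            }
        }

        # Each model answers "find Alice's orders" differently: embedded in the
        # document, by key lookup, by traversing edges, or by reading a column family.
        print(document["orders"], edges, wide_column["user:42"]["orders"])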

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. We map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps in evaluating their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research.

    Handling Confidential Data on the Untrusted Cloud: An Agent-based Approach

    Cloud computing allows shared computing and storage facilities to be used by a multitude of clients. While cloud management is centralized, the information resides in the cloud, and information sharing can be implemented via off-the-shelf techniques for multiuser databases. Users, however, are wary of not having full control over their sensitive data. Untrusted database-as-a-server techniques are neither readily extendable to the cloud environment nor easily understandable by non-technical users. To address this problem, we present an approach in which agents share reserved data in a secure manner through simple grant-and-revoke permissions on shared data.
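    As a rough illustration of the grant-and-revoke idea (a hypothetical sketch, not the agent-based implementation described in the paper; class and method names are invented), the owner of a shared item keeps an explicit permission set per data item, and the store checks it on every access:

        # Hypothetical sketch of grant-and-revoke access control on shared data items.
        # All class and method names are illustrative, not taken from the paper.

        class SharedStore:
            def __init__(self):
                self._data = {}    # item_id -> stored value
                self._owner = {}   # item_id -> owning user
                self._grants = {}  # item_id -> set of users allowed to read

            def put(self, owner, item_id, value):
                self._data[item_id] = value
                self._owner[item_id] = owner
                self._grants[item_id] = {owner}  # the owner can always read

            def grant(self, owner, item_id, user):
                # Only the owner of an item may extend access to another user.
                if self._owner.get(item_id) == owner:
                    self._grants[item_id].add(user)

            def revoke(self, owner, item_id, user):
                # Only the owner may revoke, and never from themselves.
                if self._owner.get(item_id) == owner and user != owner:
                    self._grants[item_id].discard(user)

            def read(self, user, item_id):
                # Access is denied unless the user currently holds a grant.
                if user not in self._grants.get(item_id, set()):
                    raise PermissionError(f"{user} may not read {item_id}")
                return self._data[item_id]

        store = SharedStore()
        store.put("alice", "record-1", "confidential value")
        store.grant("alice", "record-1", "bob")
        print(store.read("bob", "record-1"))   # allowed while the grant is in place
        store.revoke("alice", "record-1", "bob")
        # store.read("bob", "record-1")        # would now raise PermissionError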

    London SynEx Demonstrator Site: Impact Assessment Report

    The key ingredients of the SynEx-UCL software components are:
    1. A comprehensive and federated electronic healthcare record that can be used to reference or to store all of the necessary healthcare information acquired from a diverse range of clinical databases and patient-held devices.
    2. A directory service component that provides a core persons demographic database, used to search for and authenticate staff users of the system and to anchor patient identification and connection to their federated healthcare record.
    3. A clinical record schema management tool (Object Dictionary Client) that enables clinicians or engineers to define and export the data sets mapping to individual feeder systems.
    4. An expansible set of clinical management algorithms that provide prompts to the patient or clinician to assist in the management of patient care.
    CHIME has built up over a decade of experience within Europe of the requirements and information models that are needed to underpin comprehensive multiprofessional electronic healthcare records. The resulting architecture models have influenced new European standards in this area, and CHIME has designed and built prototype EHCR components based on these models. The demonstrator systems described here utilise a directory service and an object-oriented engineering approach, and support secure, mobile and distributed access to federated healthcare records via web-based services. The design and implementation of these software components is founded on a thorough analysis of the clinical, technical and ethico-legal requirements for comprehensive EHCR systems, published through previous project deliverables and to be published in planned future papers.
    The clinical demonstrator site described in this report has provided a solid basis from which to establish "proof of concept" verification of the design approach, and a valuable opportunity to install, test and evaluate the results of the component engineering undertaken during the EC-funded project. Inevitably, a number of practical implementation and deployment obstacles have been overcome along the way, each of which has added to the time taken to deliver the components but also to the richness of the end products. UCL is fortunate that the Whittington Hospital, and the department of cardiovascular medicine in particular, is committed to a long-term vision built around this work. That vision, outlined within this report, is shared by the Camden and Islington Health Authority, by many other purchaser and provider organisations in the area, and by a number of industrial parties. They are collectively determined to support the Demonstrator Site as an ongoing project well beyond the life of the EC SynEx Project. This report, although a final report as far as the EC project is concerned, really describes the first phase in establishing a centre of healthcare excellence. New EC Fifth Framework project funding has already been approved to enable new and innovative technology solutions to be added to the work already established in north London.

    Grid Databases for Shared Image Analysis in the MammoGrid Project

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation, and b) the ability to service real-world clinician queries across a distributed and federated database. MammoGrid will prove the viability of the Grid by harnessing its power to enable radiologists at geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer-aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid, using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK.
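    To give a feel for what servicing a clinician query across federated image stores can look like (a hypothetical sketch only; the site names, record fields and query form are invented and are not MammoGrid's actual interfaces), a broker might fan the same query out to every site and merge the results:

        # Hypothetical sketch of a federated query across distributed image stores.
        # Site names, record fields and the query format are illustrative only.

        from concurrent.futures import ThreadPoolExecutor

        # Each "site" is modelled here as an in-memory list of image metadata records.
        SITES = {
            "hospital-it": [{"patient": "P1", "modality": "MG", "birads": 4}],
            "hospital-uk": [{"patient": "P7", "modality": "MG", "birads": 2},
                            {"patient": "P9", "modality": "MG", "birads": 5}],
        }

        def query_site(site_name, predicate):
            # In a real deployment this would be a remote call to the site's data store.
            return [dict(rec, site=site_name) for rec in SITES[site_name] if predicate(rec)]

        def federated_query(predicate):
            # Fan the same query out to every site in parallel and merge the answers.
            with ThreadPoolExecutor() as pool:
                futures = [pool.submit(query_site, name, predicate) for name in SITES]
                return [rec for f in futures for rec in f.result()]

        # Example: all mammograms with a suspicious assessment, regardless of location.
        print(federated_query(lambda rec: rec["birads"] >= 4))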