4,056 research outputs found

    Cloud Bioinformatics in a private cloud deployment

    No full text

    The Database Query Support Processor (QSP)

    Get PDF
    The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is toward decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The Database Query Support Processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach, based on ANSI standard X3.138-1988, 'The Information Resource Dictionary System (IRDS)', that provides seamless access to heterogeneous databases through extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search for and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data reside in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is available off the shelf. The key difficulty is in defining the metadata required to support the process. The QSP effort will provide quantitative data on the amount of effort required to implement an extended data dictionary at the network level, add new systems, and adapt to changing user needs, and will provide sound estimates of operations and maintenance costs and savings.
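
    As a rough sketch of the extended-data-dictionary idea (not the QSP implementation itself; the element names, systems and access paths below are invented for illustration), an application can resolve a logical data element through a metadata repository instead of hard-coding which system holds it:

```python
# Illustrative sketch only: a toy metadata repository that maps logical
# data elements to their source systems, so applications need not know
# where data lives. All entries here are hypothetical.
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    system: str        # which database holds the data
    table: str         # where it lives inside that system
    access_path: str   # e.g. a DSN or service endpoint

# The "extended data dictionary": data-element name -> location description.
DATA_DICTIONARY = {
    "aircraft_tail_number": DictionaryEntry("LOGISTICS_DB", "airframes", "dsn://logistics"),
    "part_stock_level":     DictionaryEntry("SUPPLY_DB", "inventory", "dsn://supply"),
}

def locate(element_name):
    """Resolve a logical data element to its source system, or fail loudly."""
    entry = DATA_DICTIONARY.get(element_name)
    if entry is None:
        raise KeyError(f"No source registered for {element_name!r}")
    return entry

def build_query(element_name):
    """Build a routing hint for the element; a real QSP would also check privileges."""
    entry = locate(element_name)
    return f"SELECT {element_name} FROM {entry.table} @ {entry.system} ({entry.access_path})"

print(build_query("part_stock_level"))
```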

    Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research

    No full text
    This paper describes service portability for a private cloud deployment, including a detailed case study of Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies; details include functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) are performed, and the results confirm that backup automation completes swiftly and is reliable for data-intensive research. The data recovery result confirms that execution time is proportional to the quantity of recovered data, but the failure rate increases exponentially. The data migration result confirms that execution time is proportional to the disk volume of migrated data, but again the failure rate increases exponentially. In addition, the benefits of CCAF are illustrated using several bioinformatics examples such as tumour modelling, brain imaging, insulin molecules and simulations for medical training. Our Cloud Storage solution offers cost reduction, time savings and user friendliness.
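
    To make the reported scaling concrete, here is a toy model (not the paper's measurements; the coefficients are invented placeholders) in which execution time grows linearly with data volume while the failure rate grows exponentially:

```python
# Toy model of the reported behaviour: linear time, exponential failure rate.
# Coefficients are hypothetical placeholders, not values from the paper.
import math

def recovery_time_minutes(volume_gb, minutes_per_gb=0.5):
    """Linear model: time proportional to the quantity of recovered data."""
    return volume_gb * minutes_per_gb

def failure_rate(volume_gb, base_rate=0.001, growth=0.01):
    """Exponential model: failure rate rises sharply with volume (capped at 1)."""
    return min(1.0, base_rate * math.exp(growth * volume_gb))

for gb in (10, 100, 500):
    print(gb, recovery_time_minutes(gb), round(failure_rate(gb), 4))
```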

    Test, Control and Monitor System (TCMS) operations plan

    Get PDF
    The purpose of this plan is to provide a clear understanding of the Test, Control and Monitor System (TCMS) operating environment and to describe the method of operations for TCMS. TCMS is a complex and sophisticated checkout system focused on support of the Space Station Freedom Program (SSFP) and related activities. An understanding of the TCMS operating environment is provided and operational responsibilities are defined. NASA and the Payload Ground Operations Contractor (PGOC) will use it as a guide to manage the operation of the TCMS computer systems and associated networks and workstations. All TCMS operational functions are examined. Other plans and detailed operating procedures relating to an individual operational function are referenced within this plan. This plan augments existing Technical Support Management Directives (TSMDs), Standard Practices, and other management documentation, which will be followed where applicable.

    A Guide to Distributed Digital Preservation

    Get PDF
    This volume is devoted to the broad topic of distributed digital preservation, a still-emerging field of practice for the cultural memory arena. Replication and distribution hold out the promise of indefinite preservation of materials without degradation, but establishing effective organizational and technical processes to enable this form of digital preservation is daunting. Institutions need practical examples of how this task can be accomplished in manageable, low-cost ways." --P. [4] of cover

    Proposal for an IMLS Collection Registry and Metadata Repository

    Get PDF
    The University of Illinois at Urbana-Champaign proposes to design, implement, and research a collection-level registry and item-level metadata repository service that will aggregate information about digital collections and items of digital content created using funds from Institute of Museum and Library Services (IMLS) National Leadership Grants. This work will be a collaboration between the University Library and the Graduate School of Library and Information Science. All extant digital collections initiated or augmented under IMLS aegis from 1998 through September 30, 2005 will be included in the proposed collection registry. Item-level metadata will be harvested from collections making such content available using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). As part of this work, project personnel, in cooperation with IMLS staff and grantees, will define and document appropriate metadata schemas, help create and maintain collection-level metadata records, assist in implementing OAI-compliant metadata provider services for dissemination of item-level metadata records, and research potential benefits and issues associated with these activities. The immediate outcomes of this work will be the practical demonstration of technologies that have the potential to enhance the visibility of IMLS-funded online exhibits and digital library collections and to improve the discoverability of items contained in these resources. Experience gained and research conducted during this project will make clearer both the costs and the potential benefits associated with such services. Metadata provider and harvesting service implementations will be appropriately instrumented (e.g., customized anonymous transaction logs, online questionnaires for targeted user groups, performance monitors). At the conclusion of this project we will submit a final report that discusses tasks performed and lessons learned, presents business plans for sustaining registry and repository services, enumerates and summarizes potential benefits of these services, and makes recommendations regarding future implementations of these and related intermediary and end-user interoperability services by IMLS projects. (Unpublished; not peer reviewed.)
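
    As a rough illustration of the harvesting step described above (not the project's actual implementation; the provider URL in the usage comment is hypothetical), an OAI-PMH harvester issues ListRecords requests and follows resumptionToken paging:

```python
# Minimal OAI-PMH harvesting sketch (illustrative only).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield <record> elements from a provider, following resumptionToken paging."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)
        list_records = tree.find(OAI_NS + "ListRecords")
        if list_records is None:
            break
        for record in list_records.findall(OAI_NS + "record"):
            yield record
        token = list_records.find(OAI_NS + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # Subsequent requests carry only the verb and the resumption token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Example (hypothetical provider URL):
# for rec in harvest("https://example.org/oai"):
#     print(rec.find(OAI_NS + "header/" + OAI_NS + "identifier").text)
```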

    Developing a Cloud Computing Framework for University Libraries

    Get PDF
    Our understanding of the security challenges of storing research output in the cloud, in the library context, is inadequate and incomplete. Existing research has mostly focused on profit-oriented organizations. To address this limitation within the university environment, the paper unravels the data/information security concerns of cloud storage services within university libraries. Given the changes occurring in libraries, this paper serves to inform users and library managers that traditional approaches have not guaranteed the security of research output. The paper builds upon the work of Shaw and the cloud storage security framework, which links aspects of cloud security and helps explain why university libraries move research output into cloud infrastructure and how the cloud service can be made more secure. Specifically, this paper examined the existing storage carriers/media for storing research output and the risks associated with cloud storage services for university libraries. The paper partly fills this gap through a case study examination of reports from two African countries (Ghana and Uganda) on research output and cloud storage security in university libraries. The paper argues that, in storing university research output on the cloud, libraries should consider the security of content, the resilience of librarians, the determination of access levels, and enterprise cloud storage platforms. Interviews are used to collect qualitative data from librarians, and thematic content analysis is used to analyze the research data. Significantly, the results show that copyright infringement, unauthorized data access, policy issues, insecurity of content, cost, and the lack of interoperable cloud standards were the major risks associated with cloud storage services. University libraries are expected to pay more attention to the security/confidentiality of content, the resilience of librarians, determining access levels, and enterprise cloud storage platforms to enhance the cloud security of research output. The paper contributes to the field of knowledge by developing a framework that supports an approach to understanding security in cloud storage. It also enables actors in the library profession to understand the makeup and measures of security issues in cloud storage. The empirical evidence presented makes clear that university libraries have migrated research output into cloud infrastructure as an alternative for the continued storage, maintenance and access of information.

    Decision Support Tools for Cloud Migration in the Enterprise

    Full text link
    This paper describes two tools that aim to support decision making during the migration of IT systems to the cloud. The first is a modeling tool that produces cost estimates of using public IaaS clouds. The tool enables IT architects to model their applications, data and infrastructure requirements in addition to their computational resource usage patterns. The tool can be used to compare the cost of different cloud providers, deployment options and usage scenarios. The second tool is a spreadsheet that outlines the benefits and risks of using IaaS clouds from an enterprise perspective; this tool provides a starting point for risk assessment. Two case studies were used to evaluate the tools. The tools were useful as they informed decision makers about the costs, benefits and risks of using the cloud. Comment: To appear in IEEE CLOUD 201
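
    In the spirit of the cost-modelling tool described above (this is not the paper's tool; the provider names and prices below are invented placeholders), an IaaS cost comparison can be reduced to combining compute, storage and data-transfer charges per provider for a given usage scenario:

```python
# Illustrative IaaS cost comparison sketch. Provider names and unit prices
# are hypothetical placeholders, not data from the paper or any real vendor.

PROVIDERS = {
    "provider_a": {"vm_hour": 0.12, "storage_gb_month": 0.10, "egress_gb": 0.09},
    "provider_b": {"vm_hour": 0.10, "storage_gb_month": 0.12, "egress_gb": 0.11},
}

def monthly_cost(prices, vm_hours, storage_gb, egress_gb):
    """Combine compute, storage, and data-transfer charges for one month."""
    return (vm_hours * prices["vm_hour"]
            + storage_gb * prices["storage_gb_month"]
            + egress_gb * prices["egress_gb"])

def compare(vm_hours, storage_gb, egress_gb):
    """Return per-provider cost estimates for a given usage scenario."""
    return {name: round(monthly_cost(p, vm_hours, storage_gb, egress_gb), 2)
            for name, p in PROVIDERS.items()}

# Example scenario: 4 VMs running continuously (~2,920 hours/month),
# 500 GB of storage, 200 GB of outbound traffic.
print(compare(vm_hours=2920, storage_gb=500, egress_gb=200))
```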

    Managing the outsourcing of information security processes: the 'cloud' solution

    Get PDF
    Information security processes and systems are relevant for any organization and involve medium-to-high investment; however, the current economic downturn is causing a dramatic reduction in spending on Information Technology (IT). Cloud computing (i.e., the externalization of one or more IT services) might be a solution for organizations keen to maintain a good level of security. In this paper we discuss whether cloud computing is a valid alternative to in-house security processes and systems, drawing on four mini-case studies of higher education institutions in New England, US. Our findings show that the organization’s IT spending capacity affects the choice to move to the cloud; however, the perceived security of the cloud and the perceived in-house capacity to provide high-quality IT (and security) services moderate this relationship. Moreover, other variables such as the (low) quality of technical support, relatively incomplete contracts, poorly defined Service Level Agreements (SLAs), and ambiguities over data ownership affect the choice to outsource IT (and security) using the cloud. We suggest that, while cloud computing could be a useful means of IT outsourcing, a number of changes and improvements are needed in how the service is currently delivered.

    The Dark Energy Survey Data Management System

    Full text link
    The Dark Energy Survey collaboration will study cosmic acceleration with a 5000 deg^2 grizY survey in the southern sky over 525 nights from 2011-2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at NCSA and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration and later to the public through a virtual-observatory compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used Teragrid to process 10 simulated DES nights (3TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data and internal crosschecks in the case of the real data indicate that astrometric and photometric data quality is excellent. Comment: To be published in the proceedings of the SPIE conference on Astronomical Instrumentation (held in Marseille in June 2008). This preprint is made available with the permission of SPIE. Further information together with preprint containing full quality images is available at http://desweb.cosmology.uiuc.edu/wik
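
    As a loose illustration of a processing-framework loop (not the DESDM codes themselves; the stage names and exposure identifiers below are invented), nightly exposures can be pushed through an ordered list of stages and the results ingested into an archive:

```python
# Toy nightly-processing orchestration sketch. Stage names, exposure
# identifiers and the "archive" are hypothetical stand-ins; the real system
# orchestrates astronomy codes on HPC resources and an archive database.

def detrend(exposure):
    return f"{exposure}:detrended"

def astrometric_calibration(exposure):
    return f"{exposure}:astrom"

def photometric_calibration(exposure):
    return f"{exposure}:photom"

STAGES = [detrend, astrometric_calibration, photometric_calibration]

def process_night(exposures):
    """Run each exposure through the stage list, then record it in a toy archive."""
    archive = []
    for exp in exposures:
        for stage in STAGES:
            exp = stage(exp)
        archive.append(exp)
    return archive

print(process_night(["EXPOSURE_00012345", "EXPOSURE_00012346"]))
```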