    The Security Rule
    Space data management at the NSSDC (National Space Sciences Data Center): Applications for data compression
    The National Space Science Data Center (NSSDC), established in 1966, is the largest archive for processed data from NASA's space and Earth science missions. The NSSDC manages over 120,000 data tapes holding over 4,000 data sets. The digital archive is approximately 6,000 gigabytes in size, with all of this data in its original uncompressed form. By 1995 the NSSDC digital archive is expected to more than quadruple in size, reaching over 28,000 gigabytes. The NSSDC is beginning several thrusts that will allow it to better serve the scientific community and keep up with managing ever-increasing volumes of data. These thrusts involve managing larger and larger amounts of information and data online, employing mass storage techniques, and using low-rate communications networks to move requested data to remote sites in the United States, Europe, and Canada. The success of these thrusts, combined with the tremendous volume of data expected to be archived at the NSSDC, clearly indicates that innovative storage and data management solutions must be sought and implemented. Although not presently used, data compression techniques may become a very important tool for managing a large fraction, or all, of the NSSDC archive. Future applications would include compressing online data so that more data is readily available, compressing requested data that must be moved over low-rate ground networks, and compressing all the digital data in the NSSDC archive as a cost-effective backup to be used only in the event of a disaster.
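The archival applications described above all rest on lossless compression: the compressed copy must restore byte-for-byte to the original. A minimal sketch in Python using the standard-library `zlib` module (the sample payload and function names are illustrative, not from the NSSDC systems):

```python
import zlib

def compress_for_archive(data: bytes, level: int = 9) -> bytes:
    """Losslessly compress data with DEFLATE at the given level
    (9 = best compression, slowest)."""
    return zlib.compress(data, level)

def restore_from_archive(blob: bytes) -> bytes:
    """Recover the original bytes from a compressed archive blob."""
    return zlib.decompress(blob)

# Highly redundant telemetry-like sample data; real mission data
# sets would come from tape or disk and compress less uniformly.
sample = b"0123456789" * 10_000
blob = compress_for_archive(sample)

# Round trip must be exact for a disaster-recovery backup to be usable.
assert restore_from_archive(blob) == sample
print(f"original: {len(sample)} bytes, compressed: {len(blob)} bytes")
```

The same pattern serves all three use cases in the abstract: compress before writing online storage, compress before sending over a low-rate network, and compress the full archive for a cold backup.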

    2012 Grantmakers Information Technology Survey Report
    Together, the Technology Affinity Group (TAG) and the Grants Managers Network (GMN) conducted an information technology survey of grantmaking organizations in July 2012. This survey serves as a follow-up to similar surveys TAG conducted in collaboration with the Council on Foundations (the Council) in April 2003, July 2005, and June 2007, and then independently in 2010.

    Using Infrastructure as Code for Web Application Disaster Recovery
    Legacy, industry-established disaster recovery approaches are known for imposing relatively high additional expenditure, which limits the use of such mechanisms to only the most business-critical IT systems and applications. With the emergence of Infrastructure-as-Code practices, however, this paradigm can now be challenged. The objective of this thesis is to design and implement a novel disaster recovery tool that can be used for the recovery of a web application. Following the design science methodology, this thesis proposes a primary-fallback disaster recovery model in which the fallback site is an empty cloud service account into which a near-duplicate copy of the primary site is recreated in the event of a disaster. The proposed recovery process consists of two phases, with the second-phase stateful application data recovery procedure kept as an add-on to the first-phase stateless infrastructure management practices. For switching from the primary to the fallback site, the design proposes a DNS failover mechanism: by modifying the DNS A-record associations of the public IP address at the start of the recovery process, traffic can be directed to the recovered site with minimal delay. Based on the insights and data gathered during and after the evaluation phase, the tool, built with Ansible and Terraform, was found to be functional, performant, and cost-efficient within the limits and expectations set by legacy disaster recovery practices.
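The DNS failover step described in the abstract can be sketched in a few lines. This is a minimal simulation, not the thesis's actual tool: the zone store and health signal are in-memory stand-ins, and the hostnames and IP addresses are hypothetical (drawn from documentation ranges); a real implementation would call a DNS provider's API, for example from an Ansible playbook or Terraform resource.

```python
from dataclasses import dataclass, field

@dataclass
class DnsZone:
    """In-memory stand-in for a DNS zone's A records (name -> IPv4)."""
    a_records: dict[str, str] = field(default_factory=dict)

    def set_a_record(self, name: str, ip: str) -> None:
        self.a_records[name] = ip

def failover(zone: DnsZone, name: str, primary_ip: str,
             fallback_ip: str, primary_healthy: bool) -> str:
    """Point the A record at the primary site while it is healthy,
    or at the recovered fallback site after a disaster."""
    zone.set_a_record(name, primary_ip if primary_healthy else fallback_ip)
    return zone.a_records[name]

zone = DnsZone()
# Normal operation: traffic resolves to the primary site.
failover(zone, "app.example.com", "198.51.100.10", "203.0.113.20", True)
# Disaster: the record is switched to the recovered fallback site.
failover(zone, "app.example.com", "198.51.100.10", "203.0.113.20", False)
print(zone.a_records["app.example.com"])
```

Because only the A record changes, clients are redirected as soon as cached lookups expire, which is why a low record TTL matters for keeping the switchover delay minimal.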