    HYPERVISOR-AGNOSTIC SYSTEM AND METHOD FOR MODIFICATION OF SNAPSHOT FILES FOR RECOVERY

    This proposal describes a general-purpose technique for reconfiguring a virtual machine's execution environment to support a file patch service that modifies snapshot files during virtual machine recovery. The technique is hypervisor-agnostic and can be implemented in many different types of environments.
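
    A minimal sketch of the idea, assuming a text-based snapshot descriptor and simple search-and-replace patch rules (the file name and patch rules below are hypothetical, not taken from the proposal):

        # Illustrative only: apply simple search-and-replace rules to a
        # (hypothetical) text-based snapshot descriptor before recovery,
        # independently of any particular hypervisor's on-disk format.
        from pathlib import Path

        def patch_snapshot_descriptor(path, replacements):
            """Rewrite entries in a snapshot descriptor file in place."""
            descriptor = Path(path)
            text = descriptor.read_text()
            for old, new in replacements.items():
                text = text.replace(old, new)
            descriptor.write_text(text)

        if __name__ == "__main__":
            # Hypothetical example: point the snapshot at a recovered disk image.
            patch_snapshot_descriptor(
                "vm01.snapshot",
                {"/old/storage/vm01.disk": "/recovered/storage/vm01.disk"},
            )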

    A Business Continuity Solution for Telecommunications Billing Systems

    The billing system is a critical component in a telecommunications service provider's suite of business support systems: without it the provider cannot invoice customers for services provided and therefore cannot generate revenue. Typically, billing systems are hosted on a single large Unix/Oracle system located in the company's data centre. Modern Unix servers, with their redundant components and hot-swap parts, are highly resilient and can provide high levels of availability when correctly installed in a properly managed data centre with uninterruptible power supplies, cooling and so on. High-availability clustering through the use of HP MC/ServiceGuard, Sun Cluster, IBM HACMP (High Availability Cluster Multi-Processing) or Oracle Clusterware/RAC (Real Application Clusters) can raise this level of availability even higher. This approach, however, only protects against the failure of a single server or component of the system; it cannot protect against the loss of an entire data centre in a disaster such as a fire, flood or earthquake. To protect against such disasters it is necessary to provide some form of backup system on a site sufficiently remote from the primary site that it would not be affected by any disaster which might befall the primary site. This paper proposes a cost-effective business continuity solution to protect a telecommunications billing system from the effects of unplanned downtime due to server or site outages. It is aimed at the smaller-scale tier 2 and tier 3 providers, such as Mobile Virtual Network Operators (MVNOs) and start-up Competitive Local Exchange Carriers (CLECs), who are unlikely to have large established IT systems with business continuity features and for whom cost effectiveness is a key concern when implementing IT systems.
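
    As a rough illustration of the kind of site-failover logic such a solution needs (host name, port and thresholds below are invented; a real deployment would use the clustering or replication tooling named above):

        # Illustrative sketch: poll the primary billing database host and, after
        # several consecutive failures, trigger a switch to the remote standby site.
        import socket
        import time

        PRIMARY = ("billing-primary.example.com", 1521)   # hypothetical host/port
        FAILURE_THRESHOLD = 3                             # consecutive failed probes
        PROBE_INTERVAL = 30                               # seconds between probes

        def primary_reachable(addr, timeout=5):
            try:
                with socket.create_connection(addr, timeout=timeout):
                    return True
            except OSError:
                return False

        def monitor():
            failures = 0
            while True:
                if primary_reachable(PRIMARY):
                    failures = 0
                else:
                    failures += 1
                    if failures >= FAILURE_THRESHOLD:
                        print("Primary site unreachable: initiate standby activation")
                        break   # real logic would activate the standby database here
                time.sleep(PROBE_INTERVAL)

        if __name__ == "__main__":
            monitor()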

    A State-Of-The-Art Review of Cloud Forensics

    Cloud computing and digital forensics are emerging fields of technology. Unlike traditional digital forensics, where the target environment can be almost completely isolated, acquired and placed under the investigator's control, in cloud environments the distribution of computation and storage poses unique and complex challenges for investigators. Recently, the term “cloud forensics” has gained an increasing presence in the field of digital forensics. In this state-of-the-art review, we include the most recent research efforts that used “cloud forensics” as a keyword and classify the literature into three dimensions: (1) survey-based, (2) technology-based and (3) forensics-procedural-based. We discuss widely accepted standards bodies and their efforts to address the current trend of cloud forensics. Our aim is not only to reference related work along the discussed dimensions, but also to analyse it and generate a mind map that helps identify research gaps. Finally, we summarize existing digital forensics tools and the available simulation environments that can be used for evidence acquisition, examination and cloud forensics testing.

    Processing and Managing the Kepler Mission's Treasure Trove of Stellar and Exoplanet Data

    The Kepler telescope launched into orbit in March 2009, initiating NASA's first mission to discover Earth-size planets orbiting Sun-like stars. Kepler simultaneously collected data for 160,000 target stars at a time over its four-year mission, identifying over 4700 planet candidates, 2300 confirmed or validated planets, and over 2100 eclipsing binaries. While Kepler was designed to discover exoplanets, the long-term, ultra-high photometric precision measurements it achieved made it a premier observational facility for stellar astrophysics, especially in the field of asteroseismology, and for variable stars such as RR Lyrae stars. The Kepler Science Operations Center (SOC) was developed at NASA Ames Research Center to process the data acquired by Kepler, from pixel-level calibrations all the way to identifying transiting planet signatures and subjecting them to a suite of diagnostic tests to establish or break confidence in their planetary nature. Detecting small, rocky planets transiting Sun-like stars presents a variety of daunting challenges, from achieving an unprecedented photometric precision of 20 parts per million (ppm) on 6.5-hour timescales to supporting the science operations, management, processing, and repeated reprocessing of the accumulating data stream. This paper describes how the design of the SOC meets these varied challenges, discusses the architecture of the SOC and how the SOC pipeline is operated and run on the NAS Pleiades supercomputer, and summarizes the most important pipeline features addressing the multiple computational, image and signal processing challenges posed by Kepler.
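
    As a toy illustration of the kind of signal such a pipeline searches for (not the SOC algorithms themselves), the sketch below detrends a synthetic light curve and flags samples that dip below a fixed threshold; all numbers are invented for the example:

        # Toy illustration: detrend a synthetic light curve and flag transit-like
        # dips. This is not the Kepler SOC pipeline, only the general idea of the search.
        import numpy as np

        rng = np.random.default_rng(0)
        time_d = np.linspace(0, 90, 4000)                      # days
        flux = 1 + 2e-4 * np.sin(2 * np.pi * time_d / 25)      # slow stellar variability
        flux += rng.normal(0, 5e-5, time_d.size)               # photometric noise
        in_transit = (time_d % 10) < 0.25                      # 10-day period, ~6 h transits
        flux[in_transit] -= 2e-4                               # 200 ppm transit depth

        # Remove the slow trend with a running median, then look for significant dips.
        window = 101
        trend = np.array([np.median(flux[max(0, i - window):i + window])
                          for i in range(flux.size)])
        residual = flux / trend - 1
        dips = residual < -3 * residual.std()
        print(f"{dips.sum()} samples flagged as potential transit points")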

    Automated Database Refresh in Very Large and Highly Replicated Environments

    Refreshing non-production database environments is a fundamental activity. A non-production environment must closely approximate the production system and be populated with accurate, consistent data so that changes can be tested effectively before being moved into production; if the development system mirrors the scenarios of the live system, programming shortcomings can also be minimized. These requirements add pressure to refresh such systems from production frequently. Many organizations therefore need a proven and performant way of creating or moving data into their non-production environments that neither breaks business rules nor exposes confidential information to non-privileged contractors and employees. But the academic literature on refreshing non-production environments is weak, restricted largely to instruction steps. To correct this situation, this study examines ways to refresh development, QA, test or staging environments with production-quality data while maintaining the original structures. Using this method, developers' and testers' releases being promoted to the production environment are not impacted after a refresh. The study includes the design, development and testing of a system which semi-automatically backs up (saves a copy of) the current database structures, takes a clone of the production database from the reporting or Oracle Recovery Manager servers, and then reapplies the structures and obfuscates confidential data. The study used an Oracle Real Application Clusters (RAC) environment for the refresh. The findings identify methodologies for refreshing non-production environments in a timely manner without exposing confidential data and without overwriting the current structures in the database being refreshed. They also identify significant savings in time and money that can be made by preserving the developers' structures while refreshing the data.
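
    One step of such a workflow that lends itself to a short example is obfuscating confidential data after the clone is restored. The sketch below masks hypothetical customer columns through a Python database connection; the table, column names and bind-variable style are assumptions for illustration, not the study's actual schema:

        # Illustrative sketch: mask confidential columns in a freshly refreshed
        # non-production schema. Table/column names are hypothetical, and the
        # bind-variable style should match the database driver in use.
        import hashlib

        def mask_value(value, salt="nonprod"):
            """Deterministic, irreversible mask so related rows still join up."""
            return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]

        def obfuscate_customers(conn):
            cur = conn.cursor()
            cur.execute("SELECT customer_id, email FROM customers")
            for customer_id, email in cur.fetchall():
                cur.execute(
                    "UPDATE customers SET email = :email, ssn = NULL "
                    "WHERE customer_id = :cid",
                    {"email": mask_value(email), "cid": customer_id},
                )
            conn.commit()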

    Infrastructure-as-a-Service Usage Determinants in Enterprises

    The thesis addresses the research question of what determines Infrastructure-as-a-Service (IaaS) usage in enterprises. A wide range of IaaS determinants is collected into an IaaS adoption model for enterprises, which is evaluated in a Web survey. Because the economic determinants are especially important, they are investigated separately using a cost-optimizing decision support model. This decision support model is then applied to a potential IaaS use case of a large automobile manufacturer.
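
    A minimal sketch of the cost-comparison logic behind such a decision support model, with purely invented cost figures rather than the thesis's actual parameters:

        # Illustrative only: compare the cost of owning servers with pay-as-you-go
        # IaaS over a planning horizon. All figures are invented for the example.
        def on_premises_cost(servers, purchase_price, yearly_opex, years):
            return servers * (purchase_price + yearly_opex * years)

        def iaas_cost(instances, hourly_rate, hours_per_year, years):
            return instances * hourly_rate * hours_per_year * years

        if __name__ == "__main__":
            own = on_premises_cost(servers=10, purchase_price=8000, yearly_opex=2000, years=3)
            rent = iaas_cost(instances=10, hourly_rate=0.40, hours_per_year=8760, years=3)
            print(f"on-premises: {own:,.0f}  vs  IaaS: {rent:,.0f}")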

    Network Function Virtualization over Cloud-Cloud Computing as Business Continuity Solution

    Cloud computing provides resources by using virtualization technology and a pay-as-you-go cost model. Network Functions Virtualization (NFV) is a concept which promises to grant network operators the flexibility to quickly develop and provision new network functions and services that can be hosted in the cloud. However, cloud computing is subject to failures, which emphasizes the need to address users' availability requirements. Availability refers to cloud uptime and the cloud's capability to operate continuously. Providing highly available services in cloud computing is essential for maintaining customer confidence and satisfaction and for preventing revenue losses. Different techniques can be implemented to increase a system's availability and assure business continuity. This chapter covers cloud computing as a business continuity solution and cloud service availability, the causes and impact of service unavailability, and various ways to achieve the required cloud service availability.
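
    The availability figures involved are straightforward to compute; below is a short sketch using the standard series/parallel availability formulas, with invented component values rather than figures from the chapter:

        # Illustrative sketch: combined availability of components in series
        # (all must work) and in parallel (redundant replicas). Values are invented.
        from functools import reduce

        def series(availabilities):
            """System fails if any component fails."""
            return reduce(lambda a, b: a * b, availabilities)

        def parallel(availabilities):
            """System fails only if every redundant replica fails."""
            return 1 - reduce(lambda a, b: a * b, [1 - a for a in availabilities])

        single_vnf = series([0.999, 0.995, 0.998])        # compute, storage, network
        redundant = parallel([single_vnf, single_vnf])    # active/standby across clouds
        print(f"single instance: {single_vnf:.4f}, redundant deployment: {redundant:.6f}")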

    Evaluating disaster recovery plans using the cloud
