
    Firebird Database Backup by Serialized Database Table Dump

    This paper presents a simple data dump and load utility for Firebird databases that mimics mysqldump in MySQL. The utility consists of fb_dump and fb_load, for dumping and loading respectively: fb_dump retrieves each database table using kinterbasdb and serializes the data using the marshal module. The utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it can back up and restore single database tables, which may help to recover corrupted databases. Secondly, the output is in text-coded format (from the marshal module), making it more resilient than a compressed text backup, as in the case of using gbak.
    Comment: 5 pages
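    The per-table dump/load loop described above can be sketched in a few lines. This is only a minimal illustration: it uses Python's stdlib sqlite3 as a stand-in for a kinterbasdb connection (the paper targets Firebird), and the function names dump_table/load_table are illustrative, not the paper's actual fb_dump/fb_load code.

```python
import marshal
import sqlite3

def dump_table(conn, table, path):
    # Fetch every row of one table and serialize the whole row list with
    # marshal, mirroring fb_dump's per-table dump. A single-table dump is
    # what lets a corrupted database be recovered table by table.
    cur = conn.cursor()
    cur.execute("SELECT * FROM %s" % table)
    rows = [tuple(r) for r in cur.fetchall()]
    with open(path, "wb") as f:
        marshal.dump(rows, f)

def load_table(conn, table, path, ncols):
    # Deserialize the row list and re-insert it, mirroring fb_load.
    with open(path, "rb") as f:
        rows = marshal.load(f)
    placeholders = ",".join("?" * ncols)
    conn.executemany("INSERT INTO %s VALUES (%s)" % (table, placeholders), rows)
    conn.commit()
```

    With a real Firebird database the connection would come from kinterbasdb and the parameter marker would be the driver's, but the dump/serialize/reload shape is the same.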

    Chapter 16 Cloud Backup and Restore

    Digital devices are prone to failure. An increasing range of cloud backup solutions aim to ensure that no matter what happens to a user’s device, their files and data can be quickly re-downloaded and re-installed on a new device with ease. Where the failure or breakdown of a digital device may once have resulted in a potentially devastating data loss event, cloud backup and recovery tools work to reduce the disruptive impact of device failure. This has implications for theories of failure that are based on the premise that breakdown or failure are disruptive events. Drawing on Apple’s cloud-based data backup and restore service, this chapter conceptualises the cloud as an infrastructure designed to anticipate and absorb digital failure. In doing so, it explores how cloud services bolster cultures of routine device upgrading and e-waste production.

    Exploring heterogeneity of unreliable machines for p2p backup

    P2P architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, nowadays the standard solution, making backups directly on an organization's workstations should be cheaper (existing hardware is used), more efficient (there is no single bottleneck server) and more reliable (the machines are geographically dispersed). We present the architecture of a p2p backup system that uses pairwise replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and thus to take advantage of the heterogeneity of the machines and the network. Such optimization is particularly appealing in the context of backup: replicas can be geographically dispersed, the load sent over the network can be minimized, or the goal can be to minimize the backup/restore time. However, managing the contracts, keeping them consistent and adjusting them in response to a dynamically changing environment is challenging. We built a scientific prototype and ran experiments on 150 workstations in the university's computer laboratories and, separately, on 50 PlanetLab nodes. We found that the main factor affecting the quality of the system is the availability of the machines. Yet our main conclusion is that it is possible to build an efficient and reliable backup system on highly unreliable machines (our computers had just 13% average availability).
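    The kind of contract-driven placement the abstract describes can be illustrated with a small greedy sketch. Everything here is hypothetical (the Machine fields and the choose_replicators policy are not taken from the paper): it simply shows one way to favour highly available peers while keeping replicas geographically dispersed.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    availability: float  # observed fraction of time the machine is online
    site: str            # coarse location, used to disperse replicas

def choose_replicators(owner, peers, k):
    # Greedy placement sketch: walk peers from most to least available and
    # accept one replica per site, skipping the owner's own site, so the k
    # chosen replicators are both reliable and geographically dispersed.
    chosen, used_sites = [], {owner.site}
    for peer in sorted(peers, key=lambda m: m.availability, reverse=True):
        if peer.site not in used_sites:
            chosen.append(peer)
            used_sites.add(peer.site)
            if len(chosen) == k:
                break
    return chosen
```

    A real contract-based system would also renegotiate these pairings as machine availability drifts, which is exactly the maintenance burden the abstract calls challenging.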

    Research for backup possibilities of websites created in WordPress

    This article investigates backup options for websites and analyzes the existing plugins. It proposes a backup plugin built on the GIT version control system; the plugin supports incremental and differential backups, restoring a backup from a specific point, and automating backup creation through scheduling. To simplify the process of creating and updating sites, content management systems (CMS) are used that provide the ability to jointly create, edit and publish content. One of the most popular CMSs is WordPress. This article contains research details for the backup possibilities of websites developed in WordPress. Backup is one of the essential components of site security: it is designed to restore data in case of loss or damage at the original location. Keeping only the development version is not sufficient, because changes made to the site are often not reflected in its original version. Backing up a site involves creating a copy of the site's database or file system. Backups should be performed regularly, with a frequency depending on how often the site changes. To automate the backup process, special programs are used, as well as WordPress plugins that extend its capabilities.
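    The incremental-versus-differential distinction drawn above can be made concrete with a short sketch. This is not the article's plugin (which is GIT-based and PHP-side); it is a hypothetical stdlib illustration where the reference timestamp is either the time of the last backup of any kind (incremental) or of the last full backup (differential).

```python
import os

def files_changed_since(root, since_ts):
    # Walk the site's file tree and return the files modified after
    # `since_ts`. An incremental backup passes the time of the previous
    # backup of any kind; a differential backup passes the time of the
    # last *full* backup, so each differential is self-contained.
    changed = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since_ts:
                changed.append(path)
    return changed
```

    A scheduler (cron, or a WordPress cron hook) would then call this periodically and copy only the returned files, which is what keeps incremental backups cheap.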

    Backup and Restore Plan of IS / IT

    My bachelor’s thesis describes the creation of the IS backup and recovery plan in the company ABC Holding Inc. I begin by describing the character of the company’s IS, analyze the function of the IS in the company, and consider the theoretical background. In conclusion, I propose a procedure for creating the backup and recovery plan.

    Instant restore after a media failure

    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further: it permits read/write access to any data on a device undergoing restore, even data not yet restored, by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
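    The on-demand mechanism behind instant restore can be sketched as a tiny state machine: a read of a not-yet-restored segment restores just that segment out of order, while a background sweep restores the rest. This is a minimal illustration of the idea only; the class and method names are hypothetical, and a real implementation would also replay the per-segment log (the paper builds on ARIES logging).

```python
class InstantRestoreDevice:
    # Sketch of restore-on-demand: application reads are served immediately
    # by restoring the touched segment first, instead of making every
    # transaction wait for a full sequential restore to finish.
    def __init__(self, backup_segments):
        self.backup = backup_segments  # segment id -> backed-up bytes
        self.restored = {}             # segments already back on disk

    def read(self, seg_id):
        if seg_id not in self.restored:
            # restore just this segment, ahead of the sequential pass
            self.restored[seg_id] = self._restore_segment(seg_id)
        return self.restored[seg_id]

    def _restore_segment(self, seg_id):
        data = self.backup[seg_id]
        # (a real system would replay this segment's log records here)
        return data

    def background_pass(self):
        # the single sequential sweep of single-pass restore, skipping
        # segments that were already pulled in on demand
        for seg_id in self.backup:
            if seg_id not in self.restored:
                self.restored[seg_id] = self._restore_segment(seg_id)
```

    The key property is visible in the read path: post-failure latency is the cost of restoring one segment, not the whole device, which is why mean time to repair drops from hours to seconds.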