Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments
Data centres that use consumer-grade disk drives, as well as distributed
peer-to-peer systems, are unreliable environments in which to archive data
without sufficient redundancy. Most redundancy schemes are not fully effective
at providing high availability, durability and integrity in the long term. We propose alpha
entanglement codes, a mechanism that creates a virtual layer of highly
interconnected storage devices to propagate redundant information across a
large scale storage system. Our motivation is to design flexible and practical
erasure codes with high fault-tolerance to improve data durability and
availability even in catastrophic scenarios. By flexible and practical, we mean
code settings that can be adapted to future requirements and practical
implementations with reasonable trade-offs between security, resource usage and
performance. The codes have three parameters. Alpha increases storage overhead
linearly but increases the possible paths to recover data exponentially. Two
other parameters increase fault-tolerance even further without the need for
additional storage. As a result, an entangled storage system can provide high
availability, durability and offer additional integrity: it is more difficult
to modify data undetectably. We evaluate how several redundancy schemes perform
in unreliable environments and show that alpha entanglement codes are flexible
and practical codes. Remarkably, they excel at code locality; hence, they
reduce repair costs and become less dependent on storage locations with poor
availability. Our solution outperforms Reed-Solomon codes in many disaster
recovery scenarios.

Comment: The publication has 12 pages and 13 figures. This work was partially
supported by Swiss National Science Foundation SNSF Doc.Mobility 162014. 2018
48th Annual IEEE/IFIP International Conference on Dependable Systems and
Networks (DSN).
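The abstract contrasts redundancy schemes by the availability they deliver in unreliable environments. As a hedged aside, the basic trade-off between replication and classical (n, k) erasure coding can be sketched with a generic binomial-survival model; this is standard background, not the alpha entanglement codes themselves, and the parameters below are illustrative assumptions:

```python
from math import comb

def survival_probability(n: int, k: int, p_node: float) -> float:
    """Probability that at least k of n independently failing storage
    nodes are available, i.e. the data stays recoverable under an
    (n, k) MDS-style erasure code. p_node is per-node availability."""
    return sum(
        comb(n, i) * p_node**i * (1 - p_node)**(n - i)
        for i in range(k, n + 1)
    )

# 3-way replication behaves like a (3, 1) code at 3x storage overhead.
replication = survival_probability(3, 1, 0.90)   # 0.999
# A (12, 8) code needs only 1.5x overhead for comparable availability.
erasure = survival_probability(12, 8, 0.90)      # roughly 0.996
```

Models like this assume independent node failures; the paper's point is that entanglement adds recovery paths beyond what such simple (n, k) schemes offer.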
On Issues, Strategies and Solutions for Computer Security and Disaster Recovery in Online Start-ups
The vast majority of entrepreneurial ventures want both an online and an offline business model, and quite a number would prefer that their dealings occur strictly online. However, very few know what it takes to achieve 99.999% availability, a key goal in deploying computer and information technology (IT) solutions. In today's world of information technology, small and medium businesses and enterprises face a growing number of threats on online platforms. More companies are vulnerable to attacks such as DDoS, malware, viruses and ransomware. Adopting IT solutions with security in view, together with a disaster avoidance, mitigation and recovery plan or strategy, can help entrepreneurial ventures in this respect. This paper discusses the issues to consider and the strategies to adopt in IT security and disaster avoidance, as well as solutions to remedy disasters.
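The 99.999% ("five nines") availability target this abstract cites translates into a very small annual downtime budget. A quick back-of-the-envelope check (plain arithmetic, not taken from the paper):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes(availability: float) -> float:
    """Maximum downtime per year consistent with a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

downtime_minutes(0.99999)  # "five nines": about 5.26 minutes per year
downtime_minutes(0.999)    # "three nines": about 8.76 hours per year
```

The two-orders-of-magnitude gap between three and five nines is why the abstract treats the target as demanding for small online ventures.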
Middleware-based Database Replication: The Gaps between Theory and Practice
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. This has created over time a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.

Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on
Management of Data, Vancouver, Canada, June 200
A First Approach in the Assessment of the Complexity of Disaster Recovery Models for SMEs
In an organization, a well-devised disaster recovery plan is not only crucial to the information recovery process, but also vital to sustaining daily operations. While prior research has discussed many recovery site options, assessment of recovery site communication paths and their associated complexity remains limited with regard to the evaluation of disaster recovery (DR) models. Using the scale-free degree distribution formula, the authors present a methodical discussion of the network characteristics of various disaster recovery options. This study marks a pioneering effort in the DR field by applying the scale-free degree distribution formula to assess the network complexity index and overall model failure points. In addition, a modified hot model employing host virtualization, designed especially for small and medium-sized businesses, is presented. This method is particularly advantageous to small and medium-sized businesses as it leverages inexpensive commercial PC hardware.
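The scale-free degree distribution the abstract applies has the standard power-law form P(k) ~ k^(-gamma). A minimal sketch of that distribution class, with an exponent and degree range chosen purely for illustration (the paper's actual parameters are not given in the abstract):

```python
def degree_distribution(k_max: int, gamma: float = 2.5) -> dict[int, float]:
    """Normalized power-law degree distribution P(k) ~ k**(-gamma)
    over degrees 1..k_max. gamma = 2.5 is an illustrative assumption."""
    weights = {k: k ** -gamma for k in range(1, k_max + 1)}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

dist = degree_distribution(100)
# Most nodes have low degree, while a few high-degree hubs dominate
# connectivity; in a DR topology, such a hub is a high-impact failure point.
```

This skew between many low-degree nodes and a few hubs is what makes degree-based analysis useful for locating failure points in recovery-site communication paths.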