Assessing database and network threats in traditional and cloud computing
Cloud Computing is currently one of the most widely discussed terms in IT. While it offers a range of technological and financial benefits, its acceptance by organizations is not yet widespread. Security concerns are a main reason for this, and this paper studies the data and network threats posed in both the traditional and cloud paradigms in an effort to assess in which areas cloud computing addresses security issues and where it introduces new ones. The evaluation is based on Microsoft’s STRIDE threat model and discusses the stakeholders, the impact and recommendations for tackling each threat.
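The six threat categories the evaluation is organised around are fixed by Microsoft’s STRIDE model; a minimal sketch of such a threat catalogue, with invented example threats (the threat names below are illustrative, not taken from the paper), could look like:

```python
from enum import Enum

class Stride(Enum):
    """The six threat categories of Microsoft's STRIDE model."""
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

# Illustrative mapping of example data/network threats to STRIDE
# categories; the threat names are hypothetical, not from the paper.
threat_catalogue = {
    "SQL injection altering records": Stride.TAMPERING,
    "Packet sniffing on a shared network": Stride.INFORMATION_DISCLOSURE,
    "Flooding a cloud API endpoint": Stride.DENIAL_OF_SERVICE,
    "Stolen credentials for a tenant account": Stride.SPOOFING,
}

def threats_in_category(category: Stride) -> list[str]:
    """Return all catalogued threats that fall under one STRIDE category."""
    return [t for t, c in threat_catalogue.items() if c is category]
```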
Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research
This paper describes service portability for a private cloud deployment, including a detailed case study about Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies, details of which include functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) are performed, and the results confirm that backup automation completes swiftly and is reliable for data-intensive research. The data recovery result confirms that execution time is proportional to the quantity of recovered data, but that the failure rate increases exponentially. The data migration result confirms that execution time is proportional to the disk volume of migrated data, but again the failure rate increases exponentially. In addition, the benefits of CCAF are illustrated using several bioinformatics examples such as tumour modelling, brain imaging, insulin molecules and simulations for medical training. The Cloud Storage solution described here offers cost reduction, time savings and user-friendliness.
Management information systems in social safety net programs: a look at accountability and control mechanisms
This paper is intended to provide task managers and World Bank Group clients working on Social Safety Net (SSN) programs with practical and systematic ways to use information management practices to mitigate risks by strengthening control and accountability mechanisms. It lays out practices and options to consider in the design and implementation of the Management Information System (MIS), and how to evaluate and mitigate operational risks originating from running an MIS. The findings of the paper are based on a review of several Conditional Cash Transfer (CCT) programs in the Latin American Region and various World Bank publications on CCTs. The paper presents a framework for the implementation of MIS and cross-cutting information management systems that is based on industry standards and information management practices. This framework can be applied both to programs that make use of information and communications technology (ICT) and to programs that are paper based. It includes examples of MIS practices that can strengthen control and accountability mechanisms of SSN programs, and presents a roadmap for the design and implementation of an MIS in these programs. The application of the framework is illustrated through case studies from three fictitious countries. The paper concludes with some considerations and recommendations for task managers and government officials in charge of implementing CCTs and other safety net programs, and with a checklist for the implementation and monitoring of MIS.
Keywords: E-Business, Technology Industry, Education for Development (superceded), Labor Policies, Knowledge Economy
Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments
Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective at providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. Two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability and durability, and offer additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality; hence, they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
Comment: The publication has 12 pages and 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).
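For context on the Reed-Solomon baseline the abstract compares against, the standard availability arithmetic for an (n, k) maximum-distance-separable erasure code, assuming independent block failures, is a short computation (a generic sketch, not the paper's own analysis):

```python
from math import comb

def rs_survival_probability(n: int, k: int, p_fail: float) -> float:
    """Probability that an (n, k) Reed-Solomon-coded object survives,
    assuming independent block failures with probability p_fail: the
    object is recoverable iff at least k of its n blocks survive."""
    p_ok = 1.0 - p_fail
    return sum(comb(n, i) * p_ok**i * p_fail**(n - i)
               for i in range(k, n + 1))

# Storage overhead of an (n, k) code is n / k; e.g. an RS(14, 10) code
# stores 1.4x the data while tolerating the loss of any 4 blocks.
```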
Enriched multi objective optimization model based cloud disaster recovery
In cloud computing, massive data storage is one of the great challenges in terms of reliable storage of sensitive data and quality of storage service. Among the various cloud safety issues, data disaster recovery is the most significant and must be considered. Thus, in this paper, the massive data storage process in the cloud environment is analysed: the massive data storage cost is based on the data storage price, communication cost and data migration cost, while data storage reliability involves data transmission and hardware dependability. Reliable massive storage is proposed using an Enriched Multi Objective Optimization Model (EMOOM). The main objective of this optimization model is to use an Enriched Genetic Algorithm (EGA) for efficient disaster recovery in a cloud environment. Finally, the experimental results show that the proposed EMOOM model is effective and reliable.
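The abstract does not detail the EGA itself, but a minimal, generic genetic-algorithm sketch for the cost structure it describes (storage price + communication cost + migration cost, minimised over replica placements) might look like the following; all site names, cost figures and parameters are invented for illustration:

```python
import random

# Generic GA sketch in the spirit of the paper's EGA: choose which
# candidate sites hold replicas so that total cost (storage price +
# communication + migration) is minimised while keeping at least
# MIN_REPLICAS copies. All site data below is invented.
SITES = ["s1", "s2", "s3", "s4", "s5"]
STORAGE = [4.0, 2.5, 3.0, 5.0, 1.5]   # per-site storage price
COMM    = [1.0, 3.0, 0.5, 2.0, 4.0]   # per-site communication cost
MIGRATE = [0.5, 0.5, 2.0, 1.0, 0.2]   # per-site migration cost
MIN_REPLICAS = 2

def cost(genome: list[int]) -> float:
    """Total cost of a 0/1 placement; infeasible genomes are penalised."""
    if sum(genome) < MIN_REPLICAS:
        return float("inf")
    return sum(g * (s + c + m)
               for g, s, c, m in zip(genome, STORAGE, COMM, MIGRATE))

def evolve(generations: int = 50, pop_size: int = 20, seed: int = 1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in SITES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(SITES))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(SITES))       # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = min(pop, key=cost)
    return best, cost(best)
```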