3 research outputs found

    Cloud adoption hurdles, competence model, and opportunities in the African context: proof from Ethiopia

    Cloud computing refers both to the resources provided over the internet as services and to the systems software and hardware in the data centres that provide those resources. Users can draw on these resources for a variety of purposes, gaining benefits that include low ongoing cost, greater computational power, and optimized computing processes. To take advantage of these benefits, adopting the cloud computing paradigm is a necessary step, one with the potential to transform Information Technology (IT) capabilities in developing and under-developed countries. In these countries, however, several hurdles currently stand in the way of adoption, and government agencies need to balance and regulate both the hurdles and the hype around the technology. Before the cloud can be widely adopted, a systematic model of cloud adoption needs to be designed to help the agencies in charge navigate those hurdles and that hype. In this work, we study this problem in the context of adoption in Africa. The aim of this research is to investigate local cloud adoption threats, hurdles, synergies, opportunities, and human capabilities, together with theories from other disciplines, in order to design a model that serves as a guide through the local cloud adoption hurdles in the African context, especially in Ethiopia. More specifically, the key intention of this research is twofold: first, to assimilate existing game theory and reverse engineering theory, drawn from economic theory, into cloud adoption techniques; and second, to examine the effect of open-source cloud computing resources on reducing the aforementioned hurdles through experimentation with OpenStack. OpenStack is used as a test-bed for the designed mechanism, building a private cloud for the targeted organization to examine the competence of IT experts and pave the way for future research.
The model is designed around various context-based competence possibilities for academia and government. It can be used to mitigate the bottlenecks that arise from the lack of up-to-date cloud knowledge, the lack of a context-based model, the lack of government control, and the lack of well-poised, competent IT experts. These bottlenecks lead to a shortage of hands-on technical skills, confusion in cloud adoption due to the lack of standard models, under-utilization of the opportunities of open-source cloud platforms, and loose interpretations around security, trust, legal and regulatory models, control mechanisms, and privacy issues. This research is foundational in nature, assimilating and translating well-established theories from other disciplines into a theory of systematic cloud adoption. The assimilated model minimizes cloud adoption hurdles by maximizing the power of government to facilitate, regulate, understand the complexity of cloud adoption, and control its rate. It also offers a useful lens through which cloud experts can see how each hurdle is paired with opportunities, maximizing their competence.

    An ontology for risk management of digital collections

    Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and to the planning and deployment of appropriate preservation strategies. We explore several research objectives over the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, to develop an appropriate methodology for risk management, to evaluate existing preservation evaluation approaches and metrics, to structure best-practice knowledge and, lastly, to demonstrate a range of tools that utilise our findings. We describe a mixed methodology that uses interviews and surveys, extensive content analysis, practical case studies and iterative software and ontology development. We build on a robust foundation: the development of the Digital Repository Audit Method Based on Risk Assessment.
We summarise the extent of the challenge facing the digital preservation community (and, by extension, users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is a particular priority. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome combined with an expression of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment; taking risk as the focus of our analysis therefore safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements; doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
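The abstract's characterisation of risk, a likelihood paired with an impact, deconstructed into atomic units whose resolution is delegated to a responsible party, can be illustrated with a minimal sketch. All names here (the `Risk` class, its fields, the example risks and owners) are illustrative assumptions, not taken from the thesis or from PORRO:

```python
# Minimal, hypothetical sketch of risk characterisation as described in the
# abstract: likelihood and impact combine into a severity score, and each
# atomic risk records the party to whom its resolution has been delegated.

from dataclasses import dataclass, field


@dataclass
class Risk:
    name: str
    likelihood: float        # probability of the negative outcome, 0..1
    impact: float            # cost of the outcome on an agreed scale, e.g. 0..10
    owner: str               # responsibility delegated for resolution
    mitigations: list = field(default_factory=list)

    def severity(self) -> float:
        # the conventional risk-management product of likelihood and impact
        return self.likelihood * self.impact


# Two illustrative preservation risks (values invented for the example).
risks = [
    Risk("format obsolescence", likelihood=0.4, impact=8.0,
         owner="preservation team", mitigations=["migration plan"]),
    Risk("storage media failure", likelihood=0.1, impact=9.0,
         owner="IT operations", mitigations=["replication", "fixity checks"]),
]

# Rank risks so that planning effort follows exposure.
ranked = sorted(risks, key=Risk.severity, reverse=True)
```

Ranking by severity is one simple way such atomic risk records could feed preservation planning; the thesis itself develops a far richer ontology of repository, object and risk characteristics.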
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and its associated applications to evaluation by the digital preservation community.

    DSaaS: a cloud service for persistent data structures

    CITATION: Le Roux, P. B., Kroon, S. & Bester, W. 2016. DSaaS: a cloud service for persistent data structures. CLOSER 2016: 6th International Conference on Cloud Computing and Services Science, Rome, Italy, 23-25 April 2016. The original publication is available at http://closer.scitevents.org/
    In an attempt to tackle shortcomings of current approaches to collaborating on the development of structured data sets, we present a prototype platform that allows users to share and collaborate on the development of data structures via a web application, or by using language bindings or an API. Using techniques from the theory of persistent linked data structures, the resulting platform delivers automatically version-controlled map and graph abstract data types as a web service. The core of the system is a Hash Array Mapped Trie (HAMT) which is made confluently persistent by path copying. The system aims to make efficient use of storage, and to have consistent access and update times regardless of the version being accessed or modified.
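The technique the abstract names, a HAMT made persistent by path copying, can be sketched in a few dozen lines. This is a simplified illustration of the general technique, not the DSaaS implementation: updates copy only the nodes on the path from the root to the modified slot, so every version remains valid and unchanged subtrees are shared between versions.

```python
# Minimal sketch of a persistent hash trie with path copying.
# Simplifications vs. a real HAMT: fixed 32-slot nodes instead of
# bitmap-compressed ones, and no collision nodes (distinct keys with
# identical hashes would recurse forever in this toy version).

BITS = 5            # consume 5 bits of the hash per level -> 32-way branching
WIDTH = 1 << BITS


class Node:
    __slots__ = ("children",)

    def __init__(self, children=None):
        self.children = children if children is not None else [None] * WIDTH


class Leaf:
    __slots__ = ("key", "value")

    def __init__(self, key, value):
        self.key, self.value = key, value


def get(node, key):
    """Look up key in a version rooted at node; raises KeyError if absent."""
    shift = 0
    while isinstance(node, Node):
        node = node.children[(hash(key) >> shift) & (WIDTH - 1)]
        shift += BITS
    if isinstance(node, Leaf) and node.key == key:
        return node.value
    raise KeyError(key)


def insert(node, key, value, shift=0):
    """Return a NEW root; old versions stay valid and share unchanged subtrees."""
    if node is None:
        return Leaf(key, value)
    if isinstance(node, Leaf):
        if node.key == key:
            return Leaf(key, value)                    # replace in place (copied)
        # split: push the existing leaf one level down, then retry the insert
        split = Node()
        split.children[(hash(node.key) >> shift) & (WIDTH - 1)] = node
        return insert(split, key, value, shift)
    idx = (hash(key) >> shift) & (WIDTH - 1)
    copy = Node(list(node.children))                   # path copy: this node only
    copy.children[idx] = insert(node.children[idx], key, value, shift + BITS)
    return copy


# Every insert yields a new version; earlier versions are unaffected.
v1 = insert(None, "a", 1)
v2 = insert(v1, "b", 2)
v3 = insert(v2, "a", 99)
```

Because only the root-to-leaf path is copied, an update touches O(log n) nodes while all other structure is shared, which is what gives the confluently persistent map its consistent access and update times across versions.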