    Assessing the evidential value of artefacts recovered from the cloud

    Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities, and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness: tools can be added to the Cloud which can recover artefacts of evidential value. This research investigates whether artefacts recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, the research is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and then determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: identification of the specific Cloud technology being used, identification of existing tools, and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met, and shows that it is possible to recover artefacts of evidential value from XCP.
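    As an illustration only (not the recovery methodology described in this thesis), the following Python sketch shows one way the "authentic" and "accurate" criteria could be supported in practice: hashing each recovered artefact at acquisition time and keeping the digests in a simple case record that can be re-verified later. The file names and case identifier are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def hash_artefact(path: Path) -> str:
    """Return the SHA-256 digest of a recovered artefact, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_acquisition(artefacts: list[Path], case_id: str) -> dict:
    """Build a simple acquisition record: one hash per artefact plus a timestamp.

    Re-hashing the artefacts later and comparing against this record is one way
    to argue the 'authentic' and 'accurate' criteria for evidential value.
    """
    return {
        "case_id": case_id,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
        "artefacts": [
            {"path": str(p), "sha256": hash_artefact(p)} for p in artefacts
        ],
    }

if __name__ == "__main__":
    # Hypothetical artefacts exported from an XCP host (paths are illustrative).
    artefacts = [Path("audit.log"), Path("vm-disk-export.vhd")]
    print(json.dumps(record_acquisition(artefacts, case_id="XCP-2024-001"), indent=2))
```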

    A new MDA-SOA based framework for intercloud interoperability

    Cloud computing has been one of the most important topics in Information Technology, aiming to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. This collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and they do not support intercloud interoperability that would enable collaboration between cloud service providers. Hence, this PhD work is motivated by the challenge of addressing the interoperability issue between cloud providers. This thesis proposes a new framework which supports inter-cloud interoperability in a heterogeneous cloud computing environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing the different methodologies that have been applied to resolve various interoperability-related problem scenarios led us to adopt Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is an NP-complete problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agent, Cloud Provider agent, and Job agent. The ABS model is used to evaluate the proposed framework. Funding: Fundação para a Ciência e a Tecnologia (FCT) (grant reference: SFRH / BD / 33965 / 2009) and the EC 7th Framework Programme under grant agreement n° 604674 FITMAN (http://www.fitman-fi.eu).
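    As a rough illustration of the kind of GA-based scheduling described (not the thesis's actual scheduler), the sketch below evolves an assignment of jobs to clouds under an assumed fitness that mixes makespan and monetary cost; the cloud properties, job sizes and weights are invented for the example.

```python
import random

# Hypothetical per-cloud properties: processing speed (work units/hour) and cost per unit.
CLOUDS = [
    {"name": "cloud-A", "speed": 8.0, "cost": 0.12},
    {"name": "cloud-B", "speed": 5.0, "cost": 0.07},
    {"name": "cloud-C", "speed": 12.0, "cost": 0.20},
]
JOBS = [random.uniform(1, 10) for _ in range(30)]  # job sizes in arbitrary work units

def fitness(assignment):
    """Lower is better: weighted sum of makespan and total monetary cost."""
    loads = [0.0] * len(CLOUDS)
    cost = 0.0
    for job, c in zip(JOBS, assignment):
        loads[c] += job / CLOUDS[c]["speed"]
        cost += job * CLOUDS[c]["cost"]
    return 0.6 * max(loads) + 0.4 * cost

def evolve(pop_size=50, generations=200, mutation_rate=0.05):
    """Simple GA: truncation selection, single-point crossover, per-gene mutation."""
    pop = [[random.randrange(len(CLOUDS)) for _ in JOBS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(JOBS))
            child = a[:cut] + b[cut:]
            child = [
                random.randrange(len(CLOUDS)) if random.random() < mutation_rate else g
                for g in child
            ]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("best fitness:", round(fitness(best), 3))
```

    The 0.6/0.4 weighting between makespan and cost is arbitrary here; in a real scheduler it would come from the QoS policy negotiated between subscriber and provider agents.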

    OpenStack

    The aim of this thesis is to present OpenStack, an open-source platform for managing telecommunications resources in a cloud environment. The thesis first describes the architecture of the cloud environment and the service models used. It then presents the Network Function Virtualization (NFV) architecture applied in telecommunications, following the standards set by the European Telecommunications Standards Institute (ETSI). The main topic of the thesis is the presentation of the OpenStack software used by the NFV architecture: these chapters describe, in as much detail and as completely as possible, the functions of OpenStack and the components it consists of. Finally, a techno-economic analysis compares the cost of implementing the NFV architecture with that of the current architecture used in telecommunications networks. The results of this work show the many possibilities for implementing and applying the new architecture, as well as its very low operating cost compared with the existing technology.
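    As a small, hedged example of driving OpenStack programmatically (not taken from the thesis), the snippet below uses the openstacksdk client library to boot a virtual machine; it assumes a cloud entry named "mycloud" in clouds.yaml, and the image, flavor and network names are illustrative.

```python
import openstack

# Connect using credentials defined under a (hypothetical) "mycloud" entry in clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# Look up an image, flavor and network by name (names here are illustrative).
image = conn.image.find_image("cirros-0.6.2")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Ask the compute service (Nova) to boot a server, then wait until it is ACTIVE.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```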

    Proceedings of the 5th bwHPC Symposium

    In modern science, the demand for more powerful and integrated research infrastructures is growing constantly to address computational challenges in data analysis, modeling and simulation. The bwHPC initiative, founded by the Ministry of Science, Research and the Arts and the universities in Baden-Württemberg, is a state-wide federated approach aimed at assisting scientists with mastering these challenges. At the 5th bwHPC Symposium in September 2018, scientific users, technical operators and government representatives came together for two days at the University of Freiburg. The symposium provided an opportunity to present scientific results that were obtained with the help of bwHPC resources. Additionally, it served as a platform for discussing and exchanging ideas concerning the use of these large scientific infrastructures as well as their further development.

    Emerging Informatics

    The book on emerging informatics brings together new concepts and applications that will help define and outline problem-solving methods and features in designing business and human systems. It covers international aspects of information systems design, in which many relevant technologies are introduced for the welfare of human and business systems. This initiative can be viewed as an emergent area of informatics that helps better conceptualise and design new world-class solutions. The book provides four flexible sections that accommodate a total of fourteen chapters. Each section specifies learning contexts in emerging fields, and each chapter presents a clear basis through the problem conception and its applicable technological solutions. I hope this will help further exploration of knowledge in the informatics discipline.

    Intelligent Management of Virtualised Computer Based Workloads and Systems

    Managing the complexity within virtualised IT infrastructure platforms is a common problem for many organisations today. Computer systems are now highly consolidated into a relatively small physical footprint compared with the decades prior to the late 2000s, so much thought, planning and control is necessary to operate such systems effectively within the enterprise computing space. With the development of private, hybrid and public cloud utility computing this has become even more relevant. This work examines how such cloud systems use virtualisation technology and embedded software to leverage these advantages, and it takes the fresh approach of developing an intelligent decision engine (expert system). Its aim is to help reduce the complexity of managing virtualised computer-based platforms through tight integration and high levels of automation that minimise human input and errors and enforce standards and consistency, in order to achieve better management and control. The thesis investigates whether an expert system known as the Intelligent Decision Engine (IDE) can aid the management of virtualised computer-based platforms. Through a series of mixed quantitative and qualitative experiments in the research areas, the initial findings and evaluation are presented in detail, using repeatable and observable processes, together with detailed analysis of the recorded outputs. The results of the investigation establish the advantages of using the IDE (expert system) to achieve the goal of reducing the complexity of managing virtualised computer-based platforms. In each area examined, it is demonstrated how a global management approach, combined with VM provisioning, migration, failover, and system resource controls, can create a powerful autonomous system.
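    To make the idea of an if-then decision engine concrete, here is a minimal, hypothetical rule-based sketch (not the IDE itself): it inspects per-host metrics and recommends migration or failover actions when assumed thresholds are crossed.

```python
from dataclasses import dataclass

@dataclass
class HostMetrics:
    name: str
    cpu_util: float   # 0.0 - 1.0
    mem_util: float   # 0.0 - 1.0
    reachable: bool

# Illustrative thresholds; a real decision engine would derive these from policy.
CPU_HIGH, MEM_HIGH = 0.85, 0.90

def decide(hosts: list[HostMetrics]) -> list[str]:
    """Apply simple if-then rules and return recommended management actions."""
    actions = []
    for h in hosts:
        if not h.reachable:
            actions.append(f"failover: restart VMs from {h.name} on surviving hosts")
        elif h.cpu_util > CPU_HIGH or h.mem_util > MEM_HIGH:
            actions.append(f"migrate: move the busiest VM off {h.name}")
    if not actions:
        actions.append("no action: cluster within thresholds")
    return actions

if __name__ == "__main__":
    sample = [
        HostMetrics("host-01", cpu_util=0.92, mem_util=0.70, reachable=True),
        HostMetrics("host-02", cpu_util=0.40, mem_util=0.55, reachable=True),
        HostMetrics("host-03", cpu_util=0.10, mem_util=0.20, reachable=False),
    ]
    for action in decide(sample):
        print(action)
```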

    Scalability in extensible and heterogeneous storage systems

    The evolution of computer systems has brought an exponential growth in data volumes, which pushes the capabilities of current storage architectures to organize and access this information effectively: as the unending creation and demand of computer-generated data grows at an estimated rate of 40-60% per year, storage infrastructures need increasingly scalable data distribution layouts that are able to adapt to this growth with adequate performance. In order to provide the required performance and reliability, large-scale storage systems have traditionally relied on multiple RAID-5 or RAID-6 storage arrays, interconnected with high-speed networks like FibreChannel or SAS. Unfortunately, the performance of the current, most commonly used storage technology, the magnetic disk drive, cannot keep up with the rate of growth needed to sustain this explosive demand. Moreover, storage architectures based on solid-state devices (the successors of current magnetic drives) do not seem poised to replace HDD-based storage for the next 5-10 years, at least in data centers: though the performance of SSDs significantly exceeds that of hard drives, it would cost the NAND industry hundreds of billions of dollars to build enough manufacturing plants to satisfy the forecasted demand. Besides the problems derived from technological and mechanical limitations, the massive data growth poses further challenges: to build a storage infrastructure, the most flexible approach consists in using pools of storage devices that can be expanded as needed by adding new devices or replacing older ones, thus seamlessly increasing the system's performance and capacity. This approach, however, needs data layouts that can adapt to these topology changes and also exploit the potential performance offered by the hardware. Such strategies should be able to rebuild the data layout to accommodate the new devices in the infrastructure, extracting the utmost performance from the hardware and offering a balanced workload distribution. An inadequate data layout might not effectively use the enlarged capacity or better performance provided by newer devices, thus leading to imbalance problems like bottlenecks or resource underuse. Besides, massive storage systems will inevitably be composed of a collection of heterogeneous hardware: as capacity and performance requirements grow, new storage devices must be added to cope with demand, but it is unlikely that these devices will have the same capacity or performance as those already installed. Moreover, upon failure, disks are most commonly replaced by faster and larger ones, since it is not always easy (or cheap) to find a particular model of drive. In the long run, any large-scale storage system will have to cope with a myriad of devices. The title of this dissertation, "Scalability in Extensible and Heterogeneous Storage Systems", refers to the main focus of our contributions: scalable data distributions that can adapt to increasing volumes of data. Our first contribution is the design of a scalable data layout that can adapt to hardware changes while redistributing only the minimum data needed to keep a balanced workload. In the second contribution, we perform a comparative study of the influence of pseudo-random number generators on the performance and distribution quality of randomized layouts, and prove that a badly chosen generator can degrade the quality of the strategy. Our third contribution is an analysis of long-term data access patterns in several real-world traces to determine whether it is possible to offer high performance and a balanced load with less than minimal data rebalancing. In our final contribution, we apply the knowledge learnt about long-term access patterns to design an extensible RAID architecture that can adapt to changes in the number of disks without migrating large amounts of data, and prove that it can be competitive with current RAID arrays with an overhead of at most 1.28% of the storage capacity.
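    The dissertation's own layout algorithms are not reproduced here, but as one concrete example of randomized, weight-aware placement of the kind discussed, the sketch below uses weighted rendezvous hashing over a hypothetical heterogeneous device pool and measures how many blocks move when a new disk is added; device names and weights are invented.

```python
import hashlib
import math

# Hypothetical pool of heterogeneous devices: name -> capacity weight (e.g. TB).
DEVICES = {"disk-a": 2.0, "disk-b": 4.0, "disk-c": 8.0}

def _hash01(key: str) -> float:
    """Map a string to a float in (0, 1) using SHA-256."""
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return (h + 1) / (2**64 + 2)

def place(block_id: str, devices: dict[str, float]) -> str:
    """Weighted rendezvous hashing: each device scores the block, highest wins.

    Adding or removing a device only moves the blocks that device wins or loses,
    which keeps migration low while respecting capacity weights.
    """
    return max(
        devices,
        key=lambda d: -devices[d] / math.log(_hash01(f"{d}:{block_id}")),
    )

if __name__ == "__main__":
    before = {b: place(b, DEVICES) for b in (f"block-{i}" for i in range(10_000))}
    larger = {**DEVICES, "disk-d": 8.0}  # extend the pool with a new, larger disk
    moved = sum(place(b, larger) != d for b, d in before.items())
    print(f"blocks moved after adding disk-d: {moved}/{len(before)}")
```

    With these weights, roughly the new disk's proportional share of blocks (about a third) should move, which is close to the minimum migration needed to rebalance the pool for that weighting.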

    MSL Framework: (Minimum Service Level Framework) for Cloud Providers and Users

    Cloud computing enables parallel computing and has emerged as an efficient technology to meet the challenges of the rapid growth of data experienced in this Internet age. It is an emerging technology that offers subscription-based services and provides different models, such as IaaS, PaaS and SaaS, to cater to the needs of different user groups. The technology has enormous benefits, but there are serious concerns and challenges related to the lack of uniform standards and the absence of a minimum benchmark for the level of service offered across the industry, which are needed to provide an effective, uniform and reliable service to cloud users. As cloud computing gains popularity, organizations and users have problems adopting the service due to the lack of a minimum service level framework which can act as a benchmark in the selection of a cloud provider and ensure quality of service according to the user's expectations. The situation becomes more critical due to the distributed nature of the service provider, which can be offering the service from any part of the world. In the absence of a minimum service level framework acting as a benchmark for uniform service across the industry, serious concerns have been raised recently: security and data privacy breaches, authentication and authorization issues, lack of third-party audit, identity management problems, variable standards for integrity, confidentiality and data availability, no uniform incident response and monitoring standards, lack of interoperability and portability standards, lack of infrastructure protection services standards, and weak governance and compliance standards are major causes of concern for cloud users. Due to the confusion and absence of universally agreed SLAs for a service model, different qualities of service are being provided across the cloud industry. Currently there is no uniform performance model agreed by all stakeholders that can provide performance criteria to measure, evaluate and benchmark the level of services offered by various cloud providers in the industry. With the implementation of the General Data Protection Regulation (GDPR) and the demand from cloud users for Green SLAs that provide better resource allocation mechanisms, there will be serious implications for cloud providers and their consumers due to the lack of uniformity in SLAs and the variable standards of service offered by various cloud providers. This research examines the weaknesses in service level agreements offered by various cloud providers and the impact that the absence of a uniformly agreed minimum service level framework has on the adoption and usage of cloud services. The research is focused on a higher education case study and proposes a conceptual model based on a uniform minimum service model that acts as a benchmark for the industry, ensuring quality of service to cloud users in the higher education institution and removing barriers to the adoption of cloud technology. The proposed Minimum Service Level (MSL) framework provides a set of minimum and uniform standards in the key areas of concern raised by the participants from the HE institution, which are essential to cloud users and provide a minimum quality benchmark that can become a uniform standard across the industry. The proposed model produces cloud computing implementation evaluation criteria intended to reduce the barriers to adoption of cloud technology and to set minimum uniform standards, followed by all cloud providers regardless of their hosting location, so that their performance can be measured, evaluated and compared across the industry to improve the overall QoS (Quality of Service) received by cloud users, remove the adoption barriers and concerns of cloud users, and increase competition across the cloud industry.
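    As a purely illustrative sketch (the criteria and thresholds below are invented, not the MSL framework's actual values), the following Python snippet shows how a minimum-service checklist could be encoded and used to compare a provider's advertised SLA against it.

```python
# Hypothetical minimum service levels for a handful of the concern areas the
# framework covers; the thresholds here are illustrative, not the thesis's values.
MSL_CRITERIA = {
    "availability_pct": 99.9,         # minimum monthly uptime
    "incident_response_hours": 4,     # maximum time to first response
    "third_party_audit": True,        # independent audit available
    "data_portability": True,         # documented export path
}

def meets_msl(provider_sla: dict) -> dict[str, bool]:
    """Compare one provider's advertised SLA against the minimum service levels."""
    return {
        "availability_pct": provider_sla.get("availability_pct", 0)
        >= MSL_CRITERIA["availability_pct"],
        "incident_response_hours": provider_sla.get("incident_response_hours", float("inf"))
        <= MSL_CRITERIA["incident_response_hours"],
        "third_party_audit": provider_sla.get("third_party_audit", False),
        "data_portability": provider_sla.get("data_portability", False),
    }

if __name__ == "__main__":
    provider = {"availability_pct": 99.95, "incident_response_hours": 8,
                "third_party_audit": True, "data_portability": False}
    results = meets_msl(provider)
    for criterion, ok in results.items():
        print(f"{criterion:<26} {'meets' if ok else 'below'} minimum")
    print("overall:", "compliant" if all(results.values()) else "non-compliant")
```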
