
    A Process Framework for Managing Quality of Service in Private Cloud

    As information systems leaders tap into the global market of cloud computing-based services, they struggle to maintain consistent application performance due to the lack of a process framework for managing quality of service (QoS) in the cloud. Guided by disruptive innovation theory, the purpose of this case study was to identify a process framework for meeting the QoS requirements of private cloud service users. Private cloud implementation was explored at an organization in California selected through purposeful sampling. Information was gathered by interviewing 23 information technology (IT) professionals, a mix of frontline engineers, managers, and leaders involved in the implementation of the private cloud. A second source of data was documents such as standard operating procedures, policies, and guidelines related to the private cloud implementation. Interview transcripts and documents were coded and sequentially analyzed. Three prominent themes emerged from the analysis: (a) end-user expectations, (b) application architecture, and (c) trending analysis. The findings may help IT leaders manage QoS in cloud infrastructure effectively and deliver reliable application performance, which in turn may grow organizations' customer base and profitability. The study may also contribute to positive social change, as information systems managers and workers can learn and apply the process framework to deliver stable and reliable cloud-hosted applications.
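
    The third theme, trending analysis, lends itself to a small illustration. The sketch below is not drawn from the study; it simply shows, with invented response-time samples and an assumed 200 ms QoS target, how a moving average over recent measurements could flag sustained performance degradation.

        # A minimal sketch of the "trending analysis" theme: watch the moving
        # average of an application's response times and flag sustained degradation.
        # The samples and the 200 ms threshold are invented for illustration.
        from statistics import mean

        response_times_ms = [120, 130, 125, 140, 180, 210, 230, 260]  # rolling samples
        window = 4
        threshold_ms = 200

        for i in range(window, len(response_times_ms) + 1):
            avg = mean(response_times_ms[i - window:i])
            if avg > threshold_ms:
                print(f"sample {i}: {window}-point average {avg:.0f} ms exceeds QoS target")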

    Towards lightweight, low-latency network function virtualisation at the network edge

    Communication networks are witnessing dramatic growth in the number of connected mobile devices, sensors and Internet of Everything (IoE) equipment, estimated to exceed 50 billion by 2020 and to generate zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of mobile devices (e.g., HD cameras) and to fulfil users' desire for always-on, multimedia-oriented, and low-latency connectivity. To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks aim to push services to the edge of the network, in close physical proximity to the users, which has the potential to reduce end-to-end latency while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the edge of the network. In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation of the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics. The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of hosting devices, including VMs from public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by operators to keep the placement latency-optimal.
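
    To give a flavour of what a latency-optimal placement formulation can look like, the sketch below solves a toy instance as an Integer Linear Program with the PuLP library. It is not GNF's actual model: the vNF names, latency figures, and host capacities are invented, and a real formulation would also account for migrations and roaming users.

        # A minimal sketch (not GNF's formulation) of latency-optimal vNF placement
        # as an ILP, assuming per-user latency and host-capacity data are known.
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

        vnfs = ["fw_alice", "cache_bob"]            # hypothetical edge vNFs
        hosts = ["edge1", "edge2", "cloud_vm"]      # hypothetical hosting devices
        latency = {                                  # ms from each vNF's user to each host
            ("fw_alice", "edge1"): 2, ("fw_alice", "edge2"): 6, ("fw_alice", "cloud_vm"): 35,
            ("cache_bob", "edge1"): 7, ("cache_bob", "edge2"): 3, ("cache_bob", "cloud_vm"): 40,
        }
        capacity = {"edge1": 1, "edge2": 1, "cloud_vm": 10}   # vNF slots per host

        prob = LpProblem("vnf_placement", LpMinimize)
        x = {(v, h): LpVariable(f"x_{v}_{h}", cat=LpBinary) for v in vnfs for h in hosts}

        # Objective: total user-to-vNF latency across all placements.
        prob += lpSum(latency[v, h] * x[v, h] for v in vnfs for h in hosts)

        # Each vNF is placed on exactly one host.
        for v in vnfs:
            prob += lpSum(x[v, h] for h in hosts) == 1
        # Hosts cannot exceed their capacity.
        for h in hosts:
            prob += lpSum(x[v, h] for v in vnfs) <= capacity[h]

        prob.solve()
        placement = {v: h for (v, h), var in x.items() if var.value() == 1}
        print(placement)   # e.g. {'fw_alice': 'edge1', 'cache_bob': 'edge2'}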

    Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments

    In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices limits the amount of local computing resources they can carry. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobiles. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud. In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and that mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation Networks are introducing mechanisms that enable constant and reliable connectivity through seamless handovers between networks and provide the foundation for a tighter coupling between Cloud resources and mobiles. This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. The constant connectivity of mobile devices to, and their reliance on, Cloud resources have the potential to create large traffic flows between networks. Furthermore, depending on the type of application generating the traffic flow, very strict QoS may be required from the networks, as suboptimal performance may severely degrade an application's functionality. In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks, for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising QoS and managing network traffic. I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate its functionality and the results gathered from experimentation. Finally, I present a new approach to modelling network performance that takes user mobility into account. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have for the duration of the user's movement. The proposed model is evaluated using an analytical approach.
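
    As a rough illustration of modelling connection performance under mobility, the sketch below (not the thesis' model; the networks, dwell times, and handover penalty are invented) computes a time-weighted round-trip time for a persistent connection as the user moves through a sequence of networks, hinting at which networks would dominate the degradation.

        # A minimal sketch, not the thesis' model: estimate the effective latency a
        # persistent connection experiences as a user moves through several networks.
        segments = [
            # (network, seconds spent attached, mean RTT in ms) -- invented values
            ("home_wifi", 120, 15),
            ("lte_cell_a", 300, 45),
            ("campus_wifi", 180, 20),
        ]
        handover_penalty_ms = 80   # assumed one-off delay per network switch

        total_time = sum(t for _, t, _ in segments)
        # Time-weighted average RTT over the whole journey.
        weighted_rtt = sum(t * rtt for _, t, rtt in segments) / total_time
        # Amortise handover penalties over the journey duration.
        handover_overhead = handover_penalty_ms * (len(segments) - 1) / total_time

        print(f"effective RTT over the journey: {weighted_rtt + handover_overhead:.1f} ms")
        # Networks whose RTT far exceeds this figure would degrade the application
        # for the portion of the journey spent attached to them.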

    Unified Management of Applications on Heterogeneous Clouds

    The diversity with which cloud providers offer their services, each defining its own interfaces and quality and usage agreements, hinders portability and interoperability between providers, leading to the problem known as vendor lock-in. The heterogeneity that exists across the cloud's abstraction levels, such as IaaS and PaaS, makes it still a challenge to develop agnostic applications that are independent of the providers and services on which they are deployed. It also limits the ability to migrate components of running cloud applications to new providers. This lack of homogeneity likewise hinders the development of operational processes that are robust to the errors that can occur across different providers and abstraction levels. As a result, applications can remain tied to the providers for which they were designed, limiting developers' ability to react to changes in the providers or in the applications themselves. This thesis defines trans-cloud as a new dimension that unifies the management of different providers and service levels, IaaS and PaaS, under a single API and uses the TOSCA standard to describe agnostic, portable applications with automated processes, for example for deployment. In addition, building on TOSCA's structured topologies, trans-cloud proposes a generic algorithm for migrating components of running applications. Trans-cloud also unifies error handling, enabling robust, agnostic processes for managing the life cycle of applications regardless of the providers and service levels on which they run. Finally, the use cases and the results of the experiments used to validate each of these proposals are presented.
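
    To illustrate the kind of provider-agnostic migration such a unified API enables, the sketch below is purely hypothetical: the Provider class and its methods stand in for a trans-cloud-style API and are not the thesis' implementation. It moves one component of a running application to a new provider before retiring the original instance.

        # A minimal, hypothetical sketch of migrating one component of a running
        # application between providers behind a unified, provider-agnostic API.
        class Provider:
            """Stand-in for a unified management API; not a real library."""
            def __init__(self, name):
                self.name = name
            def provision(self, component, config):
                print(f"{self.name}: provision {component} with {config}")
                return f"{component}@{self.name}"
            def start(self, instance):
                print(f"{self.name}: start {instance}")
            def stop(self, instance):
                print(f"{self.name}: stop {instance}")
            def deprovision(self, instance):
                print(f"{self.name}: deprovision {instance}")

        def migrate(component, config, dependents, source, target, old_instance):
            """Move `component` from `source` to `target`, re-wiring its dependents."""
            new_instance = target.provision(component, config)   # create a replica on the target
            target.start(new_instance)
            for dep in dependents:                                # point consumers at the new endpoint
                print(f"reconfigure {dep} -> {new_instance}")
            source.stop(old_instance)                             # only then retire the original
            source.deprovision(old_instance)
            return new_instance

        aws, gcp = Provider("aws_iaas"), Provider("gcp_paas")
        migrate("web_frontend", {"port": 80}, ["load_balancer"], aws, gcp, "web_frontend@aws_iaas")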

    Perspective Chapter: Cloud Lock-in Parameters – Service Adoption and Migration

    Cloud computing has been lauded as revolutionising ICT: it relieves businesses of having to make significant capital investments in ICT while allowing them to access powerful computing capabilities over the network. Organisations adopt cloud computing as a way to solve business problems, not technical problems. As such, organisations across Europe are eagerly embracing cloud computing in their operating environments. Understanding cloud lock-in parameters is essential for supporting inter-cloud cooperation and seamless information and data exchange. Achieving vendor-neutral cloud services is a fundamental requirement and a necessary strategy for enabling portability. This chapter highlights technical advancements that contribute to the interoperable migration of services in the heterogeneous cloud environment. A set of guidelines and good practices was also collected and discussed, providing strategies for mitigating lock-in. Moreover, the chapter offers some recommendations for moving forward with cloud computing adoption. To ensure that migration and integration between on-premises and cloud environments happen with minimal disruption to business and deliver maximum sustainable cost benefit, the chapter's contribution is also designed to provide new knowledge and greater depth to support organisations around the world in making informed decisions.

    Multi-Cloud Information Security Policy Development

    Organizations' desire to utilize new and trending technologies for optimizing their businesses has been increasing over the years. Cloud computing has been around for a while and has for many become a vital part of their day-to-day operations. The concept of multi-cloud allows organizations to take advantage of each cloud vendor's best services, hinder vendor lock-in, optimize costs, and access a wider range of services. With every new technology, however, there are new vulnerabilities ready to be exploited at any time. As there is little prior research in this field, threat actors can exploit an organization's ignorance of important challenges such as interoperability issues, losing track of services when working with multiple vendors, and the lack of expertise in this young field. To alleviate such issues, one approach is to develop information security policies, hence the research question for this thesis: how can information security policies be developed in a multi-cloud environment while taking into account the unique challenges it poses? To answer the research question, we conducted a systematic literature review followed by a qualitative research approach, resulting in six semi-structured interviews with respondents with a variety of experience within the multi-cloud realm. The most prominent findings from this exploratory study are the importance of thoroughly planning the need for a multi-cloud and for information security policies, as well as applying a top-down approach in the policy development phase; this gives a more holistic view of the process, and having the right competence is also important. An interesting finding was that multi-cloud should, on paper, prevent the vendor lock-in issue, but in practice may aggravate it. Using the tools and services provided by the cloud service providers may enhance the development of information security policies, but this proves difficult in a multi-cloud setting because interoperability problems hinder it. Lastly, reviewing policies becomes more time-consuming and resource-heavy in a multi-cloud environment because of the frequent updates and changes in technology, which must be monitored. This research presents a conceptual framework, which is by no means a one-size-fits-all solution, but raises discussion for future work in this field.
