A Process Framework for Managing Quality of Service in Private Cloud
As information systems leaders tap into the global market of cloud computing-based services, they struggle to maintain consistent application performance due to the lack of a process framework for managing quality of service (QoS) in the cloud. Guided by disruptive innovation theory, the purpose of this case study was to identify a process framework for meeting the QoS requirements of private cloud service users. Private cloud implementation was explored by selecting an organization in California through purposeful sampling. Information was gathered by interviewing 23 information technology (IT) professionals, a mix of frontline engineers, managers, and leaders involved in the implementation of the private cloud. Another source of data was documents such as standard operating procedures, policies, and guidelines related to the private cloud implementation. Interview transcripts and documents were coded and sequentially analyzed. Three prominent themes emerged from the analysis of the data: (a) end-user expectations, (b) application architecture, and (c) trending analysis. The findings of this study may help IT leaders effectively manage QoS in cloud infrastructure and deliver reliable application performance, which may in turn help grow the customer base and profitability of organizations. This study may contribute to positive social change as information systems managers and workers can learn and apply the process framework for delivering stable and reliable cloud-hosted computer applications.
Towards lightweight, low-latency network function virtualisation at the network edge
Communication networks are witnessing a dramatic growth in the number of connected mobile devices, sensors and the Internet of Everything (IoE) equipment, which have been estimated to exceed 50 billion by 2020, generating zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of the mobile devices (e.g., HD cameras) and to fulfil the users' desire for always-on, multimedia-oriented, and low-latency connectivity.
To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks are aiming to push services to the edge of the network, into close physical proximity to users, which has the potential to reduce end-to-end latency while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the edge of the network.
In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation for the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics.
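The latency-optimal placement problem described above can be illustrated with a toy sketch. The data below (users, hosts, latencies, capacities) is invented for illustration and the search is exhaustive; the thesis solves the same objective at scale with Integer Linear Programming.

```python
from itertools import product

# Hypothetical inputs (not from the thesis): per-user latency to each
# candidate edge host, in milliseconds, and per-host vNF capacity.
latency_ms = {            # latency_ms[user][host]
    "u1": {"h1": 2, "h2": 9},
    "u2": {"h1": 8, "h2": 3},
    "u3": {"h1": 4, "h2": 5},
}
capacity = {"h1": 2, "h2": 2}  # max vNFs each host can run

def optimal_placement(latency_ms, capacity):
    """Exhaustively search assignments of one vNF per user to a host,
    minimising total user-to-vNF latency under host capacity limits."""
    users = list(latency_ms)
    hosts = list(capacity)
    best, best_cost = None, float("inf")
    for assign in product(hosts, repeat=len(users)):
        # enforce capacity: count vNFs placed on each host
        if any(assign.count(h) > capacity[h] for h in hosts):
            continue
        cost = sum(latency_ms[u][h] for u, h in zip(users, assign))
        if cost < best_cost:
            best, best_cost = dict(zip(users, assign)), cost
    return best, best_cost

placement, total = optimal_placement(latency_ms, capacity)
print(placement, total)   # → {'u1': 'h1', 'u2': 'h2', 'u3': 'h1'} 9
```

An ILP formulation replaces the exhaustive loop with binary placement variables and linear capacity constraints, which is what makes an exact solution tractable for realistic network sizes.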
The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of hosting devices, including VMs from public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by the operators to keep the placement latency-optimal.
Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments
In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices limits the amount of local computing resources they can offer. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobiles. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud.
In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation Networks are introducing mechanisms that enable constant and reliable connectivity through seamless handovers between networks and provide the foundation for a tighter coupling between Cloud resources and mobiles.
This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. The constant connectivity to and reliance of mobile devices on Cloud resources have the potential of creating large traffic flows between networks. Furthermore, depending on the type of application generating the traffic flow, very strict QoS may be required from the networks as suboptimal performance may severely degrade an application’s functionality.
In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising the QoS and managing network traffic.
I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate the functionality and results gathered from experimentation.
Finally, I present a new approach to modelling network performance by taking into account user mobility. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have over the duration of the user's movement. The proposed model is evaluated using an analytical approach.
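The idea of evaluating a persistent connection across several networks can be sketched with a simple dwell-time-weighted model. The networks, dwell times, latencies, and the application bound below are invented for illustration, not taken from the thesis.

```python
# A hypothetical dwell-time-weighted model: the connection's average
# performance is the time-weighted mean of the per-network latency
# along the user's path, and any network whose latency exceeds the
# application's bound is flagged as degrading the application.
segments = [                      # (network, dwell time in s, latency in ms)
    ("wifi-campus", 120, 15),
    ("lte-macro",    60, 45),
    ("wifi-cafe",    30, 80),
]
LATENCY_BOUND_MS = 50             # application requirement (assumed)

total_time = sum(t for _, t, _ in segments)
avg_latency = sum(t * l for _, t, l in segments) / total_time
offenders = [n for n, _, l in segments if l > LATENCY_BOUND_MS]

print(round(avg_latency, 1))   # → 32.9 (time-weighted average latency)
print(offenders)               # → ['wifi-cafe']
```

A richer model would also weight handover interruptions and per-network throughput, but the time-weighted aggregation over the movement path is the core of the approach.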
Unified Management of Applications on Heterogeneous Clouds
The diversity with which cloud providers offer their services, each defining its own interfaces and quality and usage agreements, hinders portability and interoperability between providers, giving rise to the problem known as vendor lock-in. The heterogeneity that exists across the cloud's different abstraction levels, such as IaaS and PaaS, makes developing agnostic applications that are independent of the providers and services on which they will be deployed still a challenge. It also limits the possibility of migrating the components of running cloud applications to new providers. This lack of homogeneity likewise hinders the development of processes for operating applications that are robust to the errors that can occur across different providers and abstraction levels. As a result, applications can remain tied to the providers for which they were designed, limiting developers' ability to react to changes in the providers or in the applications themselves. This thesis defines trans-cloud as a new dimension that unifies the management of different providers and service levels, IaaS and PaaS, under a single API, and uses the TOSCA standard to describe agnostic, portable applications with automated processes, for example for deployment. In addition, building on TOSCA's structured topologies, trans-cloud proposes a generic algorithm for migrating the components of running applications. Trans-cloud also unifies error handling, enabling robust, agnostic processes for managing the life cycle of applications regardless of the providers and service levels on which they are running. Finally, the use cases and the results of the experiments used to validate each of these proposals are presented.
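The component-migration idea can be outlined abstractly. The function below is a hypothetical sketch, not the actual trans-cloud API: given a structured topology (as TOSCA provides), moving a component means deploying a replacement on the target provider, repointing the relations of its dependents, and retiring the old instance.

```python
# Hypothetical sketch of a topology-driven migration step (all names
# assumed): move one component of a running application to a new
# provider, reconnecting its relations afterwards.
def migrate(topology, component, target_provider, deploy, stop, rewire):
    """topology maps each component to the components that depend on it;
    deploy/stop/rewire are provider-specific operations supplied by the
    unified management layer."""
    dependents = topology.get(component, [])
    new_instance = deploy(component, target_provider)  # start replacement
    for d in dependents:                               # repoint relations
        rewire(d, component, new_instance)
    stop(component)                                    # retire old copy
    return new_instance
```

Because the topology, not the provider, drives the procedure, the same loop works whether the target is an IaaS VM or a PaaS service, which is the portability point the thesis makes.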
Perspective Chapter: Cloud Lock-in Parameters – Service Adoption and Migration
Cloud computing has been lauded as revolutionising ICT, relieving businesses of significant capital investments in ICT while allowing them to connect to incredibly potent computing capabilities over the network. Organisations adopt cloud computing as a way to solve business problems, not technical problems. As such, organisations across Europe are eagerly embracing cloud computing in their operating environments. Understanding cloud lock-in parameters is essential for supporting inter-cloud cooperation and seamless information and data exchange. Achieving vendor-neutral cloud services is a fundamental requirement and a necessary strategy to fulfil in order to enable portability. This chapter highlights technical advancements that contribute to the interoperable migration of services in the heterogeneous cloud environment. A set of guidelines and good practices is also collected and discussed, providing strategies for how lock-in can be mitigated. Moreover, this chapter offers some recommendations for moving forward with cloud computing adoption. To ensure that migration and integration between on-premise and cloud happen with minimal disruption to business and yield maximum sustainable cost benefit, the chapter's contribution is also designed to provide new knowledge and greater depth to support organisations around the world in making informed decisions.
QoS within Business Grid Quality of Service (BGQoS)
Differences in domain QoS requirements have been an obstacle to utilising Grid computing for mainstream applications. While the Grid could provide vital services as well as significant computing and storage capabilities, the lack of high-level QoS specification capabilities has proven to be a hindrance. Business Grid Quality of Service (BGQoS) is a QoS model for business-oriented applications on Grid computing systems. BGQoS defines QoS at a high level, facilitating an easier request model for the Grid Resource Consumer (GRC) and eliminating confusion for the Grid Resource Provider in supplying the appropriate resources to meet the GRC's requirements. It offers high-level QoS specification within multi-domain environments in a flexible manner. Employing component separation and dynamic QoS calculation, it provides the necessary tools and execution environment for a scalable set of requirements tailored to specific domain demands. Moreover, through reallocation, the model provides assurance that all QoS requirements are met throughout the execution period, including migrating tasks to different resources if necessary. This process is not random and adheres to a set of conditions which ensure that task execution and resource allocation happen in accordance with execution requirements. This paper focuses on BGQoS' flexibility and QoS capability. More specifically, the concentration is on core operations within BGQoS and the methods used to deliver a sustained level of QoS which meets the GRC's requirements while being versatile and flexible enough to be tailored to specific domains. This paper also presents an experimental evaluation of BGQoS. The evaluation investigates the behaviour and performance of the separate operations and components within BGQoS, and moreover, it presents an investigation and comparison of the different operations and their effect on the full model.
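The reallocation mechanism described above can be sketched minimally. The names and the availability-style QoS metric below are assumptions for illustration, not the paper's actual interfaces: when a resource's measured QoS drops below a task's requirement, the task is moved to a resource that currently satisfies it.

```python
# Minimal sketch (names assumed) of QoS-driven reallocation: tasks whose
# resource no longer meets the required QoS are migrated to the best
# currently-compliant resource, if one exists.
def reallocate(tasks, resources):
    """tasks: {task: (assigned_resource, required_qos)};
    resources: {resource: measured_qos}. Returns the migrations to make."""
    moves = {}
    for task, (resource, required) in tasks.items():
        if resources[resource] >= required:
            continue                         # QoS still met, leave it
        candidates = [r for r, q in resources.items() if q >= required]
        if candidates:
            moves[task] = max(candidates, key=resources.get)
    return moves

resources = {"r1": 0.99, "r2": 0.90}          # measured QoS (e.g. availability)
tasks = {"t1": ("r2", 0.95), "t2": ("r1", 0.95)}
print(reallocate(tasks, resources))           # → {'t1': 'r1'}
```

In BGQoS the decision is of course richer, since reallocation must also respect the execution-requirement conditions the paper describes, but the monitor-compare-migrate loop is the core pattern.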
Multi-Cloud Information Security Policy Development
Organizations' long-standing desire to utilize new trending technologies to optimize their businesses has only grown over the years. Cloud computing has been around for a while, and for many it has become a vital part of day-to-day operations. The concept of multi-cloud has allowed organizations to take advantage of every cloud vendor's best services, avoid vendor lock-in, optimize costs, and access a wider range of services. With every new technology, there are new vulnerabilities ready to be exploited at any time. As there is little prior research in this field, threat actors can exploit an organization's ignorance of important challenges such as interoperability issues, losing track of services when implementing multiple vendors, and the lack of expertise in this still-maturing field. To alleviate such issues, one approach could be to develop information security policies, hence our research question for the thesis: How to develop information security policies in a multi-cloud environment with considerations of the unique challenges it offers?
To address the research question, we conducted a systematic literature review followed by a qualitative research approach. This resulted in six semi-structured interviews with respondents with a variety of experience within the multi-cloud realm. The most prominent findings from this exploratory study have been the importance of thoroughly planning the need for a multi-cloud and for information security policies, as well as applying a top-down approach to the policy development phase. This gives a more holistic view of the process, and having the right competence is additionally important. An interesting finding was that multi-cloud on paper should prevent the vendor lock-in issue, but in reality may provoke the matter. Using the tools and services provided by the cloud service providers may enhance the development of information security policies, but this proves to be difficult in multi-cloud, as the problem of interoperability hinders it. Lastly, reviewing policies becomes more time-consuming and resource-heavy in a multi-cloud because of the frequent updates and changes in technology, which have to be monitored. This research presents a conceptual framework, which is by no means a one-size-fits-all solution, but raises discussion for future work in this field.
Elastic Resource Management in Distributed Clouds
The ubiquitous nature of computing devices and their increasing reliance on remote resources have driven and shaped public cloud platforms into unprecedented large-scale, distributed data centers. Concurrently, a plethora of cloud-based applications are experiencing multi-dimensional workload dynamics---workload volumes that vary along both time and space axes and with higher frequency.
The interplay of diverse workload characteristics and distributed clouds raises several key challenges for efficiently and dynamically managing server resources. First, current cloud platforms impose certain restrictions that might hinder some resource management tasks. Second, an application-agnostic approach might not entail appropriate performance goals, therefore, requires numerous specific methods. Third, provisioning resources outside LAN boundary might incur huge delay which would impact the desired agility.
In this dissertation, I investigate the above challenges and present the design of automated systems that manage resources for various applications in distributed clouds. The intermediate goal of these automated systems is to fully exploit potential benefits such as reduced network latency offered by increasingly distributed server resources. The ultimate goal is to improve end-to-end user response time with novel resource management approaches, within a certain cost budget.
Centered around these two goals, I first investigate how to optimize the location and performance of virtual machines in distributed clouds. I use virtual desktops, mostly serving a single user, as an example use case for developing a black-box approach that ranks virtual machines based on their dynamic latency requirements. Those with high latency sensitivities have a higher priority of being placed or migrated to a cloud location closest to their users. Next, I relax the assumption of well-provisioned virtual machines and look at how to provision enough resources for applications that exhibit both temporal and spatial workload fluctuations. I propose an application-agnostic queueing model that captures the resource utilization and server response time. Building upon this model, I present a geo-elastic provisioning approach---referred as geo-elasticity---for replicable multi-tier applications that can spin up an appropriate amount of server resources in any cloud locations. Last, I explore the benefits of providing geo-elasticity for database clouds, a popular platform for hosting application backends. Performing geo-elastic provisioning for backend database servers entails several challenges that are specific to database workload, and therefore requires tailored solutions. In addition, cloud platforms offer resources at various prices for different locations. Towards this end, I propose a cost-aware geo-elasticity that combines a regression-based workload model and a queueing network capacity model for database clouds.
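The application-agnostic queueing idea can be illustrated with a back-of-the-envelope provisioning calculation. This is not the dissertation's exact model; it is a simplified sketch that treats each server as an M/M/1 queue with service rate MU, splits a location's arrival rate evenly, and provisions the smallest server count whose mean response time 1/(mu - lambda/n) meets the target.

```python
from math import ceil

# Simplified sketch (assumed numbers): provision the smallest number of
# servers n such that each server's M/M/1 mean response time
# 1 / (mu - arrival_rate/n) meets the response-time target.
def servers_needed(arrival_rate, mu, target_response):
    n = ceil(arrival_rate / mu)      # need arrival_rate/n < mu for stability
    while True:
        lam = arrival_rate / n
        if lam < mu and 1.0 / (mu - lam) <= target_response:
            return n
        n += 1

# e.g. 90 req/s arriving at a location, each server handles 10 req/s,
# and the SLA target is a 0.5 s mean response time
print(servers_needed(90, 10, 0.5))   # → 12
```

Geo-elasticity repeats this calculation per cloud location as workload shifts across the space axis; the cost-aware variant additionally weighs each location's resource price when choosing where to place the servers.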
In summary, hosting a diverse set of applications in an increasingly distributed cloud makes it interesting and necessary to develop new, efficient, and dynamic resource management approaches.