Practical service placement approach for microservices architecture
Community networks (CNs) have gained momentum in recent years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. To reduce the complexity of service deployment, community micro-clouds have recently emerged as a promising enabler for the delivery of cloud services to community users. By putting services closer to consumers, micro-clouds pursue not only better service performance but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, provisioning these services is not so simple: due to the large and irregular topology and the high software and hardware diversity of CNs, it requires a …
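The placement problem sketched above can be illustrated with a minimal greedy heuristic: among nodes that still have capacity, choose the one with the lowest latency to the service's consumers. This is a hedged sketch of the general idea, not the paper's actual approach; node names, capacities, and latency figures are illustrative.

```python
# Hedged sketch: greedy latency-aware service placement for a micro-cloud.
# All node data below is hypothetical, not taken from the paper.

def place_service(nodes):
    """Pick the node with spare capacity that minimizes average latency
    to the service's consumers, and reserve a slot on it."""
    candidates = [n for n in nodes if n["free_slots"] > 0]
    best = min(candidates, key=lambda n: n["avg_latency_ms"])
    best["free_slots"] -= 1
    return best["name"]

nodes = [
    {"name": "home-gw-1", "free_slots": 2, "avg_latency_ms": 5},
    {"name": "hotspot-7", "free_slots": 0, "avg_latency_ms": 3},   # full
    {"name": "backbone-node", "free_slots": 4, "avg_latency_ms": 20},
]
print(place_service(nodes))  # home-gw-1: lowest latency among nodes with capacity
```

A real CN placement approach must additionally cope with the irregular topology and heterogeneous hardware the abstract mentions, which a single latency metric does not capture.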
TechNews digests: Jan - Nov 2009
TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends, and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service covers digests from September 2004 to May 2010; analysis pieces and news are published together every two to three months.
State of The Art and Hot Aspects in Cloud Data Storage Security
Along with the evolution of cloud computing and cloud storage towards maturity, researchers have analyzed an increasing range of cloud computing security aspects, data security being an important topic in this area. In this paper, we examine the state of the art in cloud storage security through an overview of selected peer-reviewed publications. We address the question of defining cloud storage security and its different aspects, and enumerate the main vectors of attack on cloud storage. The reviewed papers present techniques for key management and controlled disclosure of encrypted data in cloud storage, while novel ideas regarding secure operations on encrypted data and methods for protection of data in fully virtualized environments provide a glimpse of the toolbox available for securing cloud storage. Finally, new challenges such as emerging government regulation call for solutions to problems that did not receive enough attention in earlier stages of cloud computing, such as the geographical location of data. The methods presented in the papers selected for this review represent only a small fraction of the wide research effort within cloud storage security. Nevertheless, they serve as an indication of the diversity of problems that are being addressed.
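One recurring key-management pattern in this literature is a key hierarchy: per-object keys are derived from a master key held by a key-management service, so that disclosing the key for one object reveals nothing about the others. The sketch below illustrates that general idea with an HMAC-based derivation; it is an assumption-laden illustration, not a technique from any specific surveyed paper.

```python
import hashlib
import hmac
import os

# Hedged sketch: per-object key derivation from a master key, a common
# key-hierarchy pattern for controlled disclosure of encrypted cloud data.
# The derivation scheme and names here are illustrative.

MASTER_KEY = os.urandom(32)  # held only by the key-management service

def object_key(object_id: str) -> bytes:
    """Derive a 256-bit per-object key; the master key never leaves the server."""
    return hmac.new(MASTER_KEY, object_id.encode(), hashlib.sha256).digest()

# Controlled disclosure: hand out only the key for the object being shared.
k1 = object_key("reports/2023.pdf")
k2 = object_key("reports/2024.pdf")
assert k1 != k2 and len(k1) == 32  # distinct keys per object
```

Because the derivation is deterministic, the key server needs no per-object key storage, which is one reason such hierarchies appear attractive for large cloud stores.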
Implementing and evaluating an ICON orchestrator
The cloud computing paradigm has risen, during the last 20 years, to the task of bringing powerful computational services to the masses. Centralizing computer hardware in a few large data centers has brought large monetary savings, but at the cost of a greater geographical distance between the server and the client. As a new generation of thin clients has emerged, e.g. smartphones and IoT devices, the larger latencies induced by these greater distances can limit the applications that could benefit from the vast resources available in cloud computing. Not long after the explosive growth of cloud computing, a new paradigm, edge computing, has risen. Edge computing aims at bringing the resources generally found in cloud computing closer to the edge, where many of the end users, clients, and data producers reside.
In this thesis, I will present the edge computing concept as well as the technologies enabling it. Furthermore, I will show a few edge computing concepts and architectures, including multi-access edge computing (MEC), fog computing, and intelligent containers (ICON). Finally, I will also present a new edge orchestrator, the ICON Python Orchestrator (IPO), that enables intelligent containers to migrate closer to the users.
The ICON Python Orchestrator tests the feasibility of the ICON concept and provides performance measurements that can be compared to other contemporary edge computing implementations. In this thesis, I will present the IPO architecture design, including challenges encountered during the implementation phase and solutions to specific problems. I will also show the testing and validation setup. Using the artificial testing and validation network, client migration speeds were measured in three different cases: redirection, cache-hot ICON migration, and cache-cold ICON migration. While there is room for improvement, the migration speeds measured are on par with other edge computing implementations.
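The three measurement cases above (redirection, cache-hot migration, cache-cold migration) can be pictured with a minimal timing harness. This is a hedged stand-in for the kind of measurement the thesis describes: the case functions below are placeholders, not the IPO implementation, and the sleep durations are arbitrary.

```python
import time

# Hedged sketch: timing three hypothetical client-migration cases.
# The case bodies only simulate relative costs; they are not real migrations.

def measure(case_fn) -> float:
    """Return the wall-clock duration of one migration case."""
    start = time.perf_counter()
    case_fn()
    return time.perf_counter() - start

def redirection():
    pass  # only the client's endpoint changes; no container state moves

def cache_hot_migration():
    time.sleep(0.01)  # container image already cached at the target node

def cache_cold_migration():
    time.sleep(0.02)  # image and state must be transferred first

results = {f.__name__: measure(f)
           for f in (redirection, cache_hot_migration, cache_cold_migration)}
# Expected ordering: redirection fastest, cache-cold migration slowest.
assert results["redirection"] <= results["cache_cold_migration"]
```

The point of separating the cases is that a cache-hot migration amortizes image transfer ahead of time, which is why its measured speed sits between pure redirection and a cold migration.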
Implementation and Evaluation of Mobile-Edge Computing Cooperative Caching
The recent rapid rise in mobile device users of cloud services leads to resource challenges in the Mobile Network Operator's (MNO) network. This poses significant additional costs to MNOs and also results in a poor user experience. Studies show that a large amount of the traffic in an MNO's network originates from similar user requests for the same popular Internet content. Such networks therefore suffer from delivering the same content multiple times through their gateways to the Internet backhaul. On the other hand, in content delivery networks (CDNs), the delay caused by network latency is one of the biggest issues impeding efficient delivery and a desirable user experience.
Cooperative caching is one way to handle the extra traffic caused by repeated requests for popular content in an MNO's network. Furthermore, Mobile-Edge Computing (MEC) offers a resource-rich environment and data locality to cloud applications, which helps to reduce network latency in CDN services. This thesis therefore considers a combination of cooperative caching and the MEC concept.
This thesis demonstrates the design, implementation, and evaluation of a Mobile-Edge Computing cooperative caching system for delivering content to mobile users. The design is failure resilient and scalable, using a lightweight synchronization method. The system is implemented and deployed on Nokia Networks Radio Application Cloud servers (Nokia Networks RACS) as intelligent MEC base stations, and finally the outcome of the system and its effect on bandwidth savings, CDN delay, and user experience are evaluated.
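The cooperative lookup path such a system implies can be sketched as: try the local base-station cache, then peer base stations (keeping traffic inside the MNO network), and only then the origin over the Internet backhaul. The classes and method names below are illustrative assumptions, not the thesis's implementation or any Nokia RACS API.

```python
# Hedged sketch: a cooperative cache lookup across MEC base stations.
# Everything here is a simplified stand-in for the system described above.

class EdgeCache:
    def __init__(self, name: str):
        self.name = name
        self.store: dict[str, str] = {}
        self.peers: list["EdgeCache"] = []

    def get(self, key: str) -> str:
        if key in self.store:                      # 1. local hit
            return self.store[key]
        for peer in self.peers:                    # 2. cooperative hit
            if key in peer.store:
                self.store[key] = peer.store[key]  # keep a local copy
                return self.store[key]
        value = f"origin:{key}"                    # 3. miss -> Internet backhaul
        self.store[key] = value
        return value

a, b = EdgeCache("bs-a"), EdgeCache("bs-b")
a.peers, b.peers = [b], [a]
b.store["video-42"] = "cached-bytes"
print(a.get("video-42"))  # "cached-bytes": served from peer b, saving backhaul
```

A real deployment additionally needs the lightweight synchronization the abstract mentions, so that each base station knows which peers hold which content without flooding the network.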
Elastic Resource Management in Distributed Clouds
The ubiquitous nature of computing devices and their increasing reliance on remote resources have driven and shaped public cloud platforms into unprecedented large-scale, distributed data centers. Concurrently, a plethora of cloud-based applications are experiencing multi-dimensional workload dynamics---workload volumes that vary along both time and space axes and with higher frequency.
The interplay of diverse workload characteristics and distributed clouds raises several key challenges for efficiently and dynamically managing server resources. First, current cloud platforms impose certain restrictions that can hinder some resource management tasks. Second, an application-agnostic approach may not capture appropriate performance goals and therefore requires numerous application-specific methods. Third, provisioning resources outside the LAN boundary can incur large delays that impact the desired agility.
In this dissertation, I investigate the above challenges and present the design of automated systems that manage resources for various applications in distributed clouds. The intermediate goal of these automated systems is to fully exploit potential benefits such as reduced network latency offered by increasingly distributed server resources. The ultimate goal is to improve end-to-end user response time with novel resource management approaches, within a certain cost budget.
Centered around these two goals, I first investigate how to optimize the location and performance of virtual machines in distributed clouds. I use virtual desktops, mostly serving a single user, as an example use case for developing a black-box approach that ranks virtual machines based on their dynamic latency requirements. Those with high latency sensitivity have a higher priority of being placed or migrated to the cloud location closest to their users. Next, I relax the assumption of well-provisioned virtual machines and look at how to provision enough resources for applications that exhibit both temporal and spatial workload fluctuations. I propose an application-agnostic queueing model that captures resource utilization and server response time. Building upon this model, I present a geo-elastic provisioning approach, referred to as geo-elasticity, for replicable multi-tier applications that can spin up an appropriate amount of server resources at any cloud location. Last, I explore the benefits of providing geo-elasticity for database clouds, a popular platform for hosting application backends. Performing geo-elastic provisioning for backend database servers entails several challenges that are specific to database workloads and therefore requires tailored solutions. In addition, cloud platforms offer resources at various prices in different locations. To this end, I propose a cost-aware geo-elasticity that combines a regression-based workload model and a queueing-network capacity model for database clouds.
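The black-box ranking idea can be made concrete with a toy version: order virtual desktops by an estimated latency sensitivity and place the most sensitive ones at the nearest cloud site. The sensitivity metric, VM identifiers, and site names below are illustrative assumptions, not the dissertation's actual model.

```python
# Hedged sketch: latency-sensitivity ranking for VM placement.
# "interactive_share" is a hypothetical proxy for dynamic latency requirements.

def rank_vms(vms):
    """Most latency-sensitive VMs first (higher interactive-traffic share)."""
    return sorted(vms, key=lambda v: v["interactive_share"], reverse=True)

vms = [
    {"id": "vd-1", "interactive_share": 0.9},  # heavy interactive use
    {"id": "vd-2", "interactive_share": 0.2},  # mostly batch work
]

# Give the top-ranked VM the nearest (edge) site; the rest go to the core.
sites = ["edge-dc", "core-dc"]
placement = {v["id"]: sites[min(i, len(sites) - 1)]
             for i, v in enumerate(rank_vms(vms))}
print(placement)  # {'vd-1': 'edge-dc', 'vd-2': 'core-dc'}
```

The dissertation's approach infers sensitivity without application cooperation (black-box), which is what makes the ranking usable across a diverse set of hosted applications.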
In summary, hosting a diverse set of applications in an increasingly distributed cloud makes it interesting and necessary to develop new, efficient, and dynamic resource management approaches.