Quantum network teleportation for quantum information distribution and concentration
We investigate schemes of quantum network teleportation for quantum
information distribution and concentration, which are essential in quantum cloud
computation and the quantum internet. In these schemes, the cloud can
simultaneously send identical unknown quantum states to clients located in
different places via network-like teleportation, using a previously shared
multipartite entangled state as a resource. The cloud first performs a quantum
operation; each client can then recover its quantum state locally using the
classical information about the measurement result announced by the cloud. The
number of clients can exceed the number of identical quantum states being sent,
and this quantum network teleportation ensures that the retrieved quantum state
is optimal. Furthermore, we present a scheme to realize the reverse process,
which concentrates the states from the clients to reconstruct the original
state of the cloud. These schemes facilitate quantum information distribution
and concentration in quantum networks within the framework of quantum cloud
computation. Potential applications in time synchronization are discussed.
Comment: 7 pages, 1 figure
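The recovery step (a Bell measurement by the sender, a classical announcement, and a local Pauli correction by the receiver) can be illustrated with ordinary single-qubit teleportation. The following is a minimal NumPy sketch of that textbook protocol, not the paper's multi-client network scheme:

```python
import numpy as np

# Pauli gates used for the classical-bit-controlled correction.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def bell(a, b):
    """Bell state |B_ab> = (|0 b> + (-1)^a |1 (1-b)>) / sqrt(2) as a 4-vector."""
    v = np.zeros(4)
    v[b] = 1.0
    v[2 + (1 - b)] = (-1.0) ** a
    return v / np.sqrt(2)

def teleport(psi):
    """Simulate teleportation of `psi`, returning the recovered state for
    each of the four possible Bell-measurement outcomes (a, b)."""
    total = np.kron(psi, bell(0, 0))     # qubit 0: unknown state; 1,2: |Phi+>
    recovered = {}
    for a in (0, 1):
        for b in (0, 1):
            # Project qubits 0,1 onto <B_ab|; what remains lives on qubit 2.
            meas = np.kron(bell(a, b)[None, :], I2)
            residual = meas @ total
            residual = residual / np.linalg.norm(residual)
            # The announced classical bits select the correction Z^a X^b.
            fix = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b)
            recovered[(a, b)] = fix @ residual
    return recovered

psi = np.array([0.6, 0.8])               # an "unknown" state alpha|0> + beta|1>
assert all(np.allclose(s, psi) for s in teleport(psi).values())
```

Every measurement branch yields the original state after its correction, which is what lets each client recover its state locally from the announced bits alone.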
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different
geographical locations over the Internet in order to optimally serve the needs
of their customers around the world. However, existing systems do not support
mechanisms and policies for dynamically coordinating load distribution among
different Cloud-based data centers in order to determine the optimal location
for hosting application services that achieves reasonable QoS levels. Further,
Cloud computing providers are unable to predict the geographic distribution of
users consuming their services, so load coordination must happen
automatically, and the distribution of services must change in response to
changes in the load. To counter this problem, we advocate the creation of a
federated Cloud computing environment (InterCloud) that facilitates just-in-time,
opportunistic, and scalable provisioning of application services, consistently
achieving QoS targets under variable workload, resource and network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach through a rigorous
performance evaluation study using the CloudSim toolkit. The results
demonstrate that the federated Cloud computing model has immense potential,
offering significant performance gains in response time and cost savings under
dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
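As a toy illustration of the kind of placement decision an InterCloud broker faces, the sketch below greedily assigns each request to the cheapest federated data center that still meets a latency (QoS) target. The class fields, site names, and cost-first rule are all illustrative assumptions, not the paper's CloudSim-based mechanism:

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    capacity: int        # free VM slots at this site
    latency_ms: float    # expected response time for a new request
    cost: float          # price per VM-hour

def place(requests, centers, max_latency):
    """Greedily place each request on the cheapest site meeting the QoS target."""
    placement = []
    for _ in range(requests):
        eligible = [c for c in centers
                    if c.capacity > 0 and c.latency_ms <= max_latency]
        if not eligible:
            placement.append(None)       # demand spills: QoS cannot be met
            continue
        best = min(eligible, key=lambda c: c.cost)
        best.capacity -= 1
        placement.append(best.name)
    return placement

centers = [DataCenter("us-east", 2, 40.0, 0.10),
           DataCenter("eu-west", 1, 80.0, 0.08),
           DataCenter("ap-south", 3, 150.0, 0.05)]
print(place(5, centers, max_latency=100.0))
# ap-south is cheapest but misses the 100 ms target, so it is never used
```

A real broker would also weigh network conditions and dynamic load, but even this sketch shows why federation needs both QoS awareness and spare capacity at multiple sites.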
Optimization of Radio and Computational Resources for Energy Efficiency in Latency-Constrained Application Offloading
Providing femto-access points (FAPs) with computational capabilities will
allow total or partial offloading of highly demanding applications
from smartphones to the so-called femto-cloud. Such offloading promises to be
beneficial in terms of battery saving at the mobile terminal (MT) and/or
latency reduction in the execution of applications, whenever the energy and/or
time required for the communication process are compensated by the energy
and/or time savings that result from the remote computation at the FAPs. For
this problem, we provide in this paper a framework for the joint optimization
of the radio and computational resource usage exploiting the tradeoff between
energy consumption and latency, and assuming that multiple antennas are
available at the MT and the serving FAP. As a result of the optimization, the
optimal communication strategy (e.g., transmission power, rate, precoder) is
obtained, as well as the optimal distribution of the computational load between
the handset and the serving FAP. The paper also establishes the conditions
under which total offloading or no offloading is optimal, determines the
minimum achievable latency in the execution of the application, and analyzes,
as a particular case, the minimization of the total consumed energy without
latency constraints.
Comment: Accepted for publication in IEEE Transactions on Vehicular Technology
(accepted November 2014)
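The core trade-off (offload only when the communication energy and time are compensated by the remote-computation savings) can be sketched with a toy scan over offload fractions. All parameter names and the parallel-execution model below are illustrative assumptions, not the paper's multi-antenna optimization framework:

```python
def best_offload_fraction(C, D, f_mt, f_fap, e_cycle, p_tx, rate, latency):
    """Scan offload fractions x and return (x, energy) minimising MT energy.

    Toy model (all names are illustrative): C cycles of work, D bits to
    upload if offloaded, MT and FAP clock speeds f_mt / f_fap, MT compute
    energy e_cycle per cycle, transmit power p_tx at uplink `rate`.  The
    local part runs in parallel with upload + remote execution, and the
    slower branch must finish within `latency`.
    """
    best = None
    for k in range(101):
        x = k / 100.0                               # fraction offloaded
        t_local = (1 - x) * C / f_mt
        t_remote = x * D / rate + x * C / f_fap     # upload + FAP compute
        if max(t_local, t_remote) > latency:
            continue                                # misses the deadline
        energy = (1 - x) * C * e_cycle + x * D / rate * p_tx
        if best is None or energy < best[1]:
            best = (x, energy)
    return best           # None means the deadline is infeasible even locally

# Transmitting here is cheap relative to local compute, so, as in the
# paper's "total offloading" regime, x = 1 comes out optimal.
print(best_offload_fraction(C=1e9, D=1e6, f_mt=1e9, f_fap=4e9,
                            e_cycle=1e-9, p_tx=1.0, rate=1e7, latency=0.5))
```

Varying `latency` in this toy reproduces the qualitative regimes the abstract describes: loose deadlines favour full offloading when the radio is efficient, and very tight deadlines make offloading (or the problem itself) infeasible.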
Cloud assisted P2P media streaming for bandwidth constrained mobile subscribers
Multimedia streaming applications have disruptively occupied bandwidth in the wireline Internet, yet today's fledgling mobile media streaming still poses many challenges for efficient content distribution due to the form factor of mobile devices. At the same time, cloud computing is gaining momentum as a promising technology to transform the IT industry, and many eminent enterprises are developing their own cloud infrastructures. However, the lack of applications hinders the large-scale deployment of clouds. In this paper, we envision a cloud-assisted, power-efficient mobile P2P media streaming architecture that addresses the weaknesses of today's wireless access technologies. Clouds are responsible for storage- and computing-demanding tasks, while colocated mobile devices share bandwidth and cooperatively stream media content to distribute the load. We first model the interactions among mobile devices as a coalition game, and then discuss the optimal chunk retrieval scheduling. Finally, we draw on realistic mobile phone data and utilize an ARIMA model for colocation duration prediction among mobile devices. © 2010 IEEE.
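As a lightweight stand-in for the ARIMA-based colocation-duration prediction, here is a hypothetical least-squares AR(1) forecaster (AR(1) being the ARIMA(1,0,0) special case); real traces would call for a full ARIMA fit:

```python
def ar1_forecast(series, horizon=3):
    """Least-squares AR(1) fit and multi-step forecast.

    Fits x_{t+1} = c + phi * x_t by ordinary least squares, then iterates
    the recurrence `horizon` steps beyond the last observation.
    """
    x, y = series[:-1], series[1:]
    xbar, ybar = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    den = sum((a - xbar) ** 2 for a in x)
    phi = num / den                       # AR(1) coefficient
    c = ybar - phi * xbar                 # intercept
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = c + phi * last             # iterate the fitted recurrence
        preds.append(last)
    return preds

# A series that exactly follows x_{t+1} = 1 + 0.5 * x_t:
print(ar1_forecast([4.0, 3.0, 2.5, 2.25, 2.125], horizon=2))
```

On a series generated exactly by the recurrence, the fit recovers phi and c and the forecast continues the sequence; on noisy colocation traces an ARIMA model with differencing and moving-average terms would be the appropriate generalisation.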
Scaling social media applications into geo-distributed clouds
Federation of geo-distributed cloud services is a trend in cloud computing which, by spanning multiple data centers at different geographical locations, can provide a cloud platform with much larger capacity. Such a geo-distributed cloud is ideal for supporting large-scale social media streaming applications (e.g., YouTube-like sites) with dynamic contents and demands, owing to its abundant on-demand storage/bandwidth capacities and geographical proximity to different groups of users. Although promising, its realization presents challenges in how to efficiently store and migrate contents among different cloud sites (i.e., data centers), and how to distribute user requests to the appropriate sites for timely responses at modest cost. These challenges escalate when we consider the persistently increasing contents and volatile user behaviors in a social media application. By exploiting social influences among users, this paper proposes efficient proactive algorithms for dynamic, optimal scaling of a social media application in a geo-distributed cloud. Our key contribution is an online content migration and request distribution algorithm with the following features: (1) future demand prediction based on a novel characterization of social influences among users in a simple but effective epidemic model; (2) one-shot optimal content migration and request distribution based on efficient optimization algorithms to address the predicted demand; and (3) a Δ(t)-step look-ahead mechanism to adjust the one-shot optimization results towards the offline optimum. We verify the effectiveness of our algorithm through solid theoretical analysis, as well as large-scale experiments under dynamic realistic settings on a home-built cloud platform. © 2012 IEEE.
The 31st Annual IEEE International Conference on Computer Communications (IEEE INFOCOM 2012), Orlando, FL, 25-30 March 2012. In IEEE INFOCOM Proceedings, 2012, p. 684-69
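One plausible discretisation of epidemic demand prediction is the SI (logistic) recurrence below, in which each already-interested user "infects" others until the potential audience saturates. The recurrence and its parameters are illustrative assumptions, not the paper's exact model:

```python
def si_demand_forecast(d0, beta, population, steps):
    """Discrete SI spread of interest in a content item.

    d(t+1) = d(t) + beta * d(t) * (1 - d(t) / N): each of the d(t) users
    already interested spreads interest at rate beta, damped as demand
    approaches the potential audience N.  A hypothetical discretisation in
    the spirit of an epidemic demand model, not the paper's formulation.
    """
    demand = [float(d0)]
    for _ in range(steps):
        d = demand[-1]
        demand.append(d + beta * d * (1.0 - d / population))
    return demand

# 10 initial viewers, audience of 1000: demand ramps up then saturates.
print(si_demand_forecast(10, 0.5, 1000, 5))
```

Such a forecast is what the one-shot migration/request-distribution optimization would consume: the predicted demand per region determines how many replicas to place at each cloud site before the demand materialises.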
The Power of Static Pricing for Reusable Resources
We consider the problem of pricing a reusable resource service system.
Potential customers arrive according to a Poisson process and purchase the
service if their valuation exceeds the current price. If no units are
available, customers immediately leave without service. Serving a customer
corresponds to using one unit of the reusable resource, where the service time
has an exponential distribution. The objective is to maximize the steady-state
revenue rate. This system is equivalent to the classical Erlang loss model with
price-sensitive customers, which has applications in vehicle sharing, cloud
computing, and spare parts management.
Although the optimal pricing policy is dynamic, we provide two main results
that show a simple static policy is universally near-optimal for any service
rate, arrival rate, and number of units in the system. When there is one class
of customers who have a monotone hazard rate (MHR) valuation distribution, we
prove that a static pricing policy guarantees 90.4% of the revenue from the
optimal dynamic policy. When there are multiple classes of customers that each
have their own regular valuation distribution and service rate, we prove that
static pricing guarantees 78.9% of the revenue of the optimal dynamic policy.
In this case, the optimal dynamic pricing policy is exponentially large in the
number of classes, while the static policy requires only one price per class.
Moreover, we prove that the optimal static policy can be easily computed,
resulting in the first polynomial-time approximation algorithm for this problem.
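The single-class setting can be made concrete with a small sketch: Erlang-B blocking combined with price-sensitive Poisson arrivals gives the steady-state revenue rate of any static price, which can then be maximised over a grid. The exponential valuation distribution (which is MHR) and all numbers below are illustrative choices, not the paper's analysis:

```python
import math

def erlang_b(load, c):
    """Erlang-B blocking probability via the standard stable recursion."""
    b = 1.0
    for k in range(1, c + 1):
        b = load * b / (k + load * b)
    return b

def revenue_rate(price, lam, mu, c):
    """Steady-state revenue rate under a single static price.

    Customers arrive at rate lam, buy iff valuation > price, and hold one
    of c units for an exp(mu) service time.  With (illustrative) exp(1)
    valuations, the effective arrival rate is lam * exp(-price); admitted
    (non-blocked) customers each pay `price`.
    """
    lam_eff = lam * math.exp(-price)
    blocked = erlang_b(lam_eff / mu, c)
    return lam_eff * (1.0 - blocked) * price

# Brute-force the best static price on a grid (lam, mu, c are made up).
lam, mu, c = 10.0, 1.0, 5
best_price = max((p / 100.0 for p in range(1, 500)),
                 key=lambda p: revenue_rate(p, lam, mu, c))
print(best_price, revenue_rate(best_price, lam, mu, c))
```

The grid search over one scalar is exactly why static pricing is computationally attractive: the dynamic optimum would require a price per occupancy state (and, with multiple classes, per class-occupancy vector).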
SGA Model for Prediction in Cloud Environment
Through virtualisation, cloud computing has made applications available to users everywhere. Efficient workload forecasting could help the cloud achieve maximum resource utilisation. Both the effective utilisation of resources and the reduction of data-centre power consumption depend heavily on load forecasting. Resource allocation and task scheduling in clouds and virtualised systems are significantly affected by CPU-utilisation forecasts. A resource manager uses utilisation projections to distribute workload between physical nodes, improving the effectiveness of resource consumption. When placing virtual machines, a good estimate of CPU utilisation enables the migration of one or more virtual servers, preventing the overload of the physical machines. In a cloud system, scalability and flexibility are crucial characteristics. Predicting workload and demand aids optimal resource utilisation in a cloud setting. To improve resource allocation and the effectiveness of the cloud service, workload assessment and future-workload forecasting can be performed, for which an appropriate statistical method is needed. In this study, a simulation approach and a genetic algorithm were used to forecast workloads. Compared with earlier techniques, the method is anticipated to produce superior results, with a lower error rate and higher forecasting reliability. The suggested method is evaluated using traces from the Bitbrains data centres. The study then analyses, summarises, and suggests future research directions for cloud environments.
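In the spirit of the study's simulation-plus-genetic-algorithm approach, the toy GA below evolves linear-predictor weights for a CPU-utilisation series. The real-valued encoding, midpoint crossover, Gaussian mutation, and every hyper-parameter are illustrative assumptions, not the paper's SGA model:

```python
import random

def ga_fit_predictor(series, lags=2, pop_size=30, generations=60, seed=7):
    """Evolve weights so the previous `lags` samples predict the next one.

    Fitness is the mean squared one-step prediction error on the series;
    elitist selection keeps the best third, and children are midpoint
    crossovers of two elite parents plus Gaussian mutation on one gene.
    """
    rng = random.Random(seed)
    xs = [series[i:i + lags] for i in range(len(series) - lags)]
    ys = series[lags:]

    def mse(w):   # fitness: mean squared one-step prediction error
        err = sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
                  for x, y in zip(xs, ys))
        return err / len(ys)

    pop = [[rng.uniform(-1, 1) for _ in range(lags)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        elite = pop[:pop_size // 3]                       # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child[rng.randrange(lags)] += rng.gauss(0, 0.1)   # mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=mse)
    return best, mse(best)

# Synthetic CPU trace generated by x_t = 0.6 * x_{t-1} + 0.3 * x_{t-2}:
trace = [1.0, 0.8]
for _ in range(28):
    trace.append(0.6 * trace[-1] + 0.3 * trace[-2])
weights, err = ga_fit_predictor(trace)
print(weights, err)
```

A forecaster like this would sit inside the resource manager: predicted CPU utilisation per node drives VM placement and triggers migration before a physical machine overloads.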