InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different
geographical locations over the Internet in order to optimally serve the needs
of their customers around the world. However, existing systems do not support
mechanisms and policies for dynamically coordinating load distribution among
different Cloud-based data centers in order to determine the optimal location
for hosting application services so as to achieve reasonable QoS levels.
Further, Cloud computing providers are unable to predict the geographic
distribution of users consuming their services; hence, load coordination must
happen automatically, and the distribution of services must change in response
to changes in the load. To address this problem, we advocate the creation of a
federated Cloud computing environment (InterCloud) that facilitates
just-in-time, opportunistic, and scalable provisioning of application services,
consistently achieving QoS targets under variable workload, resource, and
network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach by conducting a
rigorous performance evaluation study using the CloudSim toolkit. The results
demonstrate that the federated Cloud computing model has immense potential, as
it offers significant performance gains with regard to response time and cost
savings under dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
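The load-coordination idea in the abstract above can be illustrated with a toy sketch: a broker routes requests to the federated data center with the lowest estimated response time, and each data center expands or contracts its VM pool as utilization drifts. This is not the InterCloud implementation; all class names, thresholds, and the M/M/1-style latency estimate are illustrative assumptions.

```python
# Toy federation sketch (NOT the InterCloud implementation): route requests to
# the data center with the lowest estimated response time, and autoscale VM
# pools to keep utilization inside a band. All names/values are assumptions.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    vms: int                 # provisioned VM instances
    load: float              # outstanding request load
    base_latency_ms: float   # network latency from the user region

    def utilization(self) -> float:
        return self.load / self.vms

    def est_response_ms(self) -> float:
        # Crude queueing-style penalty: latency blows up as utilization -> 1.
        u = min(self.utilization(), 0.99)
        return self.base_latency_ms / (1.0 - u)

def route(request_load: float, centers: list[DataCenter]) -> DataCenter:
    """Send the request to the data center with the lowest estimated response time."""
    best = min(centers, key=lambda c: c.est_response_ms())
    best.load += request_load
    return best

def autoscale(dc: DataCenter, low: float = 0.3, high: float = 0.8) -> None:
    """Expand or contract the VM pool to keep utilization within [low, high]."""
    while dc.utilization() > high:
        dc.vms += 1
    while dc.vms > 1 and dc.utilization() < low:
        dc.vms -= 1
```

A lightly loaded, nearby data center wins the routing decision even over a closer-to-capacity one with similar network latency, which is the qualitative behavior the federation argument relies on.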
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only can
individual applications be hosted on virtual cloud infrastructures, but also
complete business processes. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation
FlightGoggles is a photorealistic sensor simulator for perception-driven
robotic vehicles. The key contributions of FlightGoggles are twofold. First,
FlightGoggles provides photorealistic exteroceptive sensor simulation using
graphics assets generated with photogrammetry. Second, it provides the ability
to combine (i) synthetic exteroceptive measurements generated in silico in real
time and (ii) vehicle dynamics and proprioceptive measurements generated in
motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of
simulating a virtual-reality environment around autonomous vehicle(s). While a
vehicle is in flight in the FlightGoggles virtual reality environment,
exteroceptive sensors are rendered synthetically in real time while all complex
extrinsic dynamics are generated organically through the natural interactions
of the vehicle. The FlightGoggles framework allows researchers to
accelerate development by circumventing the need to estimate complex and
hard-to-model interactions such as aerodynamics, motor mechanics, battery
electrochemistry, and behavior of other agents. The ability to perform
vehicle-in-the-loop experiments with photorealistic exteroceptive sensor
simulation facilitates novel research directions involving, e.g., fast and
agile autonomous flight in obstacle-rich environments, safe human interaction,
and flexible sensor selection. FlightGoggles has been utilized as the main
testbed for selecting nine teams that will advance in the AlphaPilot autonomous drone
racing challenge. We survey approaches and results from the top AlphaPilot
teams, which may be of independent interest.
Comment: Initial version appeared at IROS 2019. Supplementary material can be
found at https://flightgoggles.mit.edu. Revision includes description of new
FlightGoggles features, such as a photogrammetric model of the MIT Stata
Center, new rendering settings, and a Python API
Machine Learning Algorithms for Provisioning Cloud/Edge Applications
International Mention in the doctoral degree.
Reinforcement Learning (RL), in which an agent is trained to make the most
favourable decisions in the long run, is an established technique in artificial intelligence. Its
popularity has increased in the recent past, largely due to the development of deep neural
networks spawning deep reinforcement learning algorithms such as Deep Q-Learning. The
latter have been used to solve previously insurmountable problems, such as playing the
famed game of “Go” that previous algorithms could not. Many such problems suffer the
curse of dimensionality, in which the sheer number of possible states is so overwhelming
that it is impractical to explore every possible option.
While these recent techniques have been successful, they may not be strictly
necessary or practical for some applications such as cloud provisioning. In
these situations, the action space is not as vast, and the workload data
required to train such systems is not as widely shared, as it is considered
commercially sensitive by the Application Service Provider (ASP). Given that
provisioning decisions evolve over time in sympathy with incident workloads,
they fit into the sequential decision process problem that legacy RL was
designed to solve. However, because of the high correlation of time-series
data, states are not independent of each other, and legacy Markov Decision
Processes (MDPs) have to be cleverly adapted to create robust provisioning
algorithms.
As the first contribution of this thesis, we exploit the knowledge of both the application
and configuration to create an adaptive provisioning system leveraging stationary Markov
distributions. We then develop algorithms that, with neither application nor configuration
knowledge, solve the underlying Markov Decision Process (MDP) to create provisioning
systems. Our Q-Learning algorithms factor in the correlation between states
and the consequent transitions between them to create provisioning systems
that not only adapt to workloads but also exploit similarities between them,
thereby reducing the retraining overhead. Our algorithms also converge in
fewer learning steps, given that we restructure the state and action spaces to
avoid the curse of dimensionality without the need for the function
approximation approach taken by deep Q-Learning systems.
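To make the provisioning-as-MDP framing concrete, here is a minimal tabular Q-learning sketch: states discretize the workload level and current VM count, actions add, remove, or keep a VM, and the reward trades off unserved demand (SLA violations) against per-VM cost. This is an illustrative toy, not the thesis algorithms; every constant (capacities, costs, learning rates) is an assumption, and as a simplification the bootstrap step reuses the current workload level for the next state.

```python
# Toy tabular Q-learning for VM provisioning (illustrative, not the thesis
# algorithms). States: (discretized workload level, VM count). Actions: scale
# the VM pool by -1, 0, or +1. All constants below are assumptions.
import random

ACTIONS = (-1, 0, 1)           # remove a VM, hold, add a VM
MAX_VMS = 10
CAPACITY_PER_VM = 10.0         # requests one VM can serve per time step

def reward(workload: float, vms: int) -> float:
    """Penalize unserved demand (SLA violation proxy) plus per-VM running cost."""
    unserved = max(0.0, workload - vms * CAPACITY_PER_VM)
    return -(unserved + 0.5 * vms)

def train(workloads, episodes=300, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the workload trace."""
    rng = random.Random(seed)
    q = {}  # (workload_level, vms, action) -> estimated return
    for _ in range(episodes):
        vms = 1
        for w in workloads:
            level = int(w // CAPACITY_PER_VM)   # discretized workload state
            if rng.random() < eps:              # explore
                a = rng.choice(ACTIONS)
            else:                               # exploit current estimates
                a = max(ACTIONS, key=lambda act: q.get((level, vms, act), 0.0))
            nxt = min(MAX_VMS, max(1, vms + a))
            r = reward(w, nxt)
            # Sketch simplification: bootstrap with the current workload level
            # instead of looking ahead to the next workload's level.
            best_next = max(q.get((level, nxt, b), 0.0) for b in ACTIONS)
            old = q.get((level, vms, a), 0.0)
            q[(level, vms, a)] = old + alpha * (r + gamma * best_next - old)
            vms = nxt
    return q
```

With a sustained workload of 35 requests per step, the learned greedy policy scales up from one VM, since adding capacity strictly reduces the SLA penalty until four VMs are provisioned.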
A crucial use-case of future networks will be the support of low-latency applications
involving highly mobile users. With these in mind, the European Telecommunications Standards Institute (ETSI) has proposed the Multi-access Edge Computing (MEC)
architecture, in which computing capabilities can be located close to the network edge,
where the data is generated. Provisioning for such applications therefore entails migrating
them to the most suitable location on the network edge as the users move. In this thesis,
we also tackle this type of provisioning by considering vehicle platooning or Cooperative
Adaptive Cruise Control (CACC) on the edge. We show that our Q-Learning algorithm
can be adapted to minimize the number of migrations required to effectively run such
an application on MEC hosts, which may also be subject to traffic from other competing
applications.
This work has been supported by the IMDEA Networks Institute. Doctoral
Programme in Telematic Engineering, Universidad Carlos III de Madrid. Thesis
committee: Antonio Fernández Anta (Chair), Diego Perino (Secretary), Ilenia
Tinnirello (Member).
Space Warps: I. Crowd-sourcing the Discovery of Gravitational Lenses
We describe Space Warps, a novel gravitational lens discovery service that
yields samples of high purity and completeness through crowd-sourced visual
inspection. Carefully produced colour composite images are displayed to
volunteers via a web-based classification interface, which records their
estimates of the positions of candidate lensed features. Images of simulated
lenses, as well as real images which lack lenses, are inserted into the image
stream at random intervals; this training set is used to give the volunteers
instantaneous feedback on their performance, as well as to calibrate a model of
the system that provides dynamical updates to the probability that a classified
image contains a lens. Low probability systems are retired from the site
periodically, concentrating the sample towards a set of lens candidates. Having
divided 160 square degrees of Canada-France-Hawaii Telescope Legacy Survey
(CFHTLS) imaging into some 430,000 overlapping 82 by 82 arcsecond tiles and
displaying them on the site, we were joined by around 37,000 volunteers who
contributed 11 million image classifications over the course of 8 months. This
Stage 1 search reduced the sample to 3381 images containing candidates; these
were then refined in Stage 2 to yield a sample that we expect to be over 90%
complete and 30% pure, based on our analysis of the volunteers' performance on
training images. We comment on the scalability of the Space Warps system to the
wide-field survey era, based on our projection that searches of 10^5 images
could be performed by a crowd of 10^5 volunteers in 6 days.
Comment: 21 pages, 13 figures, MNRAS accepted, minor to moderate changes in
this version
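The crowd-calibration scheme described above (volunteer skill measured on training images, with dynamical updates to each image's lens probability and retirement of low-probability systems) can be sketched as a sequence of Bayes updates. This is a minimal illustration in the spirit of that scheme, not the actual Space Warps analysis code; the skill values and retirement threshold are assumptions.

```python
# Minimal sketch of crowd-calibrated Bayesian classification (illustrative,
# not the Space Warps pipeline). Each volunteer has two skill parameters
# estimated from training images: P(says "lens" | lens) and
# P(says "not" | not). Each classification updates P(image contains a lens);
# images falling below a threshold are retired from the site.

RETIREMENT_THRESHOLD = 1e-5   # assumed cut for retiring images

def update_posterior(p_lens, said_lens, p_l_given_lens, p_n_given_not):
    """One Bayes update of P(lens) after a single volunteer's classification."""
    if said_lens:
        likelihood_lens = p_l_given_lens         # P("lens" | lens)
        likelihood_not = 1.0 - p_n_given_not     # P("lens" | not a lens)
    else:
        likelihood_lens = 1.0 - p_l_given_lens   # P("not" | lens)
        likelihood_not = p_n_given_not           # P("not" | not a lens)
    num = likelihood_lens * p_lens
    return num / (num + likelihood_not * (1.0 - p_lens))

def classify_stream(prior, votes):
    """votes: iterable of (said_lens, P(L|L), P(N|N)); returns the posterior,
    stopping early once the image would be retired."""
    p = prior
    for said_lens, pll, pnn in votes:
        p = update_posterior(p, said_lens, pll, pnn)
        if p < RETIREMENT_THRESHOLD:
            break   # retire: very unlikely to contain a lens
    return p
```

A skilled volunteer (both skill parameters at 0.9) saying "lens" moves a 50% prior to 90%, while a run of "not" votes from such volunteers quickly drives an image below the retirement threshold, which is the concentrating behavior the abstract describes.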