In Broker We Trust: A Double-auction Approach for Resource Allocation in NFV Markets
Network function virtualization (NFV) is an emerging scheme for providing virtualized network function services in next-generation networks. However, finding an efficient way to distribute resources among customers is difficult. In this paper, we develop a new double-auction approach, named DARA, that handles both service function chain routing and NFV price adjustment so as to maximize the profits of all participants. To the best of our knowledge, this is the first work to adopt a double-auction strategy in this area. The proposed approach aims to maximize the profits of three types of participants: 1) the NFV broker; 2) customers; and 3) service providers. Moreover, we prove that the approach yields a weakly dominant strategy in a given NFV market by finding the Bayesian Nash equilibrium of the double-auction game. Finally, the performance evaluation shows that our approach outperforms a single-auction mechanism, yielding higher profits for all three types of participants in the given NFV market.
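As a rough illustration of the mechanism family this abstract refers to (not the paper's DARA algorithm, whose routing and pricing rules are specific to the paper), the following Python sketch matches customer bids against provider asks in a broker-mediated double auction; all names, prices, and the clearing rule are illustrative assumptions.

```python
# Minimal broker-mediated double auction: customers bid, providers ask, and
# the broker matches the highest bids with the lowest asks, pocketing the
# spread. A generic textbook mechanism, not the paper's DARA algorithm.

def double_auction(bids, asks):
    """bids: list of (customer, bid); asks: list of (provider, ask)."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # best buyers first
    asks = sorted(asks, key=lambda a: a[1])                # cheapest sellers first
    trades = []
    for (customer, bid), (provider, ask) in zip(bids, asks):
        if bid < ask:                  # no more mutually profitable matches
            break
        trades.append({"customer": customer, "provider": provider,
                       "pays": bid, "receives": ask,
                       "broker_profit": bid - ask})
    return trades

if __name__ == "__main__":
    bids = [("c1", 10.0), ("c2", 7.0), ("c3", 4.0)]
    asks = [("p1", 3.0), ("p2", 6.0), ("p3", 9.0)]
    for trade in double_auction(bids, asks):
        print(trade)
```

In this toy clearing rule the customer pays its bid and the provider receives its ask, so all three participant types can profit, which is the structure the abstract describes.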
Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies
A Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases, and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries, which makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems, most of them the results of academic research projects around the world, have been developed. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion, and Gridbus. It also presents our implementation of a resource broker for UNICORE, as this functionality was not supported in it. A comparison of these systems on the basis of architecture, implementation model, and several other features is included.
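The chapter's resource broker is only named here, not specified; as a hedged illustration of what such a component does, the Python sketch below matches a job's requirements against a catalogue of Grid resources and picks the cheapest one that fits. The attribute names and selection rule are assumptions, not UNICORE's interface.

```python
# Generic resource-broker sketch: choose the cheapest resource that satisfies
# a job's requirements. Attribute names and the cost model are illustrative
# assumptions; this is not the UNICORE broker described in the chapter.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpus: int
    memory_gb: int
    cost_per_hour: float

def select_resource(resources, cpus_needed, memory_gb_needed):
    """Return the cheapest resource meeting the requirements, or None."""
    candidates = [r for r in resources
                  if r.cpus >= cpus_needed and r.memory_gb >= memory_gb_needed]
    return min(candidates, key=lambda r: r.cost_per_hour, default=None)

if __name__ == "__main__":
    catalogue = [Resource("siteA", 64, 256, 1.20),
                 Resource("siteB", 16, 64, 0.40),
                 Resource("siteC", 32, 128, 0.70)]
    print(select_resource(catalogue, cpus_needed=24, memory_gb_needed=96))
```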
BonFIRE: A multi-cloud test facility for internet of services experimentation
BonFIRE offers a Future Internet, multi-site cloud testbed, targeted at the Internet of Services community, that supports large-scale testing of applications, services, and systems over multiple, geographically distributed, heterogeneous cloud testbeds. The aim of BonFIRE is to provide an infrastructure that gives experimenters the ability to control and monitor the execution of their experiments to a degree not found in traditional cloud facilities. The BonFIRE architecture has been designed to support key functionalities such as resource management; monitoring of virtual and physical infrastructure metrics; elasticity; single-document experiment descriptions; and scheduling. As of January 2012, BonFIRE release 2 is operational, supporting seven pilot experiments. Future releases will enhance the offering, including interconnection with networking facilities to provide access to routers, switches, and bandwidth-on-demand systems. BonFIRE will be open for general use in late 2012.
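To make the "single-document experiment description" idea concrete, here is a hypothetical descriptor expressed as a Python dict. This is not BonFIRE's actual format; every field name is invented purely to show how resources, monitoring, and elasticity rules might be declared in one document.

```python
# Hypothetical single-document experiment description. NOT BonFIRE's real
# descriptor format; all field names are invented for illustration only.
experiment = {
    "name": "load-test-01",
    "duration_hours": 2,
    "sites": ["site-a", "site-b"],   # geographically distributed testbeds
    "compute": [{"role": "web", "instances": 3, "image": "ubuntu-server"}],
    "monitoring": {"metrics": ["cpu", "net_in", "net_out"], "interval_s": 10},
    "elasticity": {"metric": "cpu", "scale_out_above": 0.8, "max_instances": 10},
}
print(experiment["name"])
```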
Platforms and Protocols for the Internet of Things
Building a general architecture for the Internet of Things (IoT) is a very complex task, exacerbated by the extremely large variety of devices, link-layer technologies, and services that may be involved in such a system. In this paper, we identify the main blocks of a generic IoT architecture, describing their features and requirements, and analyze the most common approaches proposed in the literature for each block. In particular, we compare three of the most important communication technologies for IoT purposes, i.e., REST, MQTT, and AMQP, and we also analyze three IoT platforms: openHAB, Sentilo, and Parse. The analysis shows the importance of adopting an integrated approach that jointly addresses several issues and is able to flexibly accommodate the requirements of the various elements of the system. We also discuss a use case that illustrates the design challenges and the choices to make when selecting which protocols and technologies to use.
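A minimal sketch of the contrast the paper draws between two of these technologies: REST's synchronous request/response versus MQTT's broker-mediated publish/subscribe. It assumes a reachable HTTP endpoint and MQTT broker (both hostnames are placeholders) and the third-party paho-mqtt package; it is not taken from the paper.

```python
# REST vs. MQTT interaction styles, side by side. Hostnames are placeholders;
# requires the paho-mqtt package for the MQTT half.
import json
import urllib.request

import paho.mqtt.publish as publish

reading = json.dumps({"sensor": "temp-01", "value": 21.5})

# REST: the client issues a synchronous request and waits for the response.
req = urllib.request.Request(
    "http://iot-gateway.example/sensors/temp-01",   # placeholder URL
    data=reading.encode(), method="PUT",
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print("REST status:", resp.status)

# MQTT: the client pushes the reading to a broker, which forwards it to any
# subscribers; the sender never learns who (if anyone) consumes it.
publish.single("sensors/temp-01", reading, qos=1,
               hostname="broker.example")           # placeholder broker
```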
A role-based software architecture to support mobile service computing in IoT scenarios
The interaction among components of an IoT-based system usually requires low-latency or real-time message delivery, depending on the application needs and the quality of the communication links among the components. Moreover, in some cases, this interaction must tolerate communication links with poor or uncertain Quality of Service (QoS). Research efforts in communication support for IoT scenarios have overlooked the challenge of providing real-time interaction support over unstable links, forcing these systems onto dedicated networks that are expensive and usually limited in terms of physical coverage and robustness. This paper presents an alternative to address this communication challenge, through the use of a model that allows soft real-time interaction among components of an IoT-based system. The behavior of the proposed model was validated using state machine theory, opening an opportunity to explore a whole new branch of smart distributed solutions and to extend the state of the art and the state of the practice in this particular IoT scenario.
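The paper's model is validated with state machine theory but not reproduced here; as a generic illustration of the soft real-time flavor it describes, the sketch below models a sender that retries over an unreliable link and gives up once its retry budget is exhausted. The states, transitions, and probabilities are assumptions, not the authors' model.

```python
# Toy sender state machine over an unreliable link: bounded retries, then
# give up -- a soft real-time guarantee rather than a hard one. States and
# transitions are illustrative assumptions, not the paper's model.
import random

IDLE, SENDING, WAIT_ACK, DELIVERED, FAILED = range(5)

def run(max_retries=3, ack_probability=0.6, seed=None):
    rng = random.Random(seed)
    state, retries = IDLE, 0
    while state not in (DELIVERED, FAILED):
        if state == IDLE:
            state = SENDING
        elif state == SENDING:
            state = WAIT_ACK                    # message handed to the link
        elif state == WAIT_ACK:
            if rng.random() < ack_probability:  # ack arrived in time
                state = DELIVERED
            elif retries < max_retries:         # timeout: retry
                retries += 1
                state = SENDING
            else:                               # retry budget exhausted
                state = FAILED
    return ("DELIVERED" if state == DELIVERED else "FAILED"), retries

if __name__ == "__main__":
    print(run(seed=42))
```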
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Furthermore, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands.

This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
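The paper's evaluation uses the CloudSim toolkit; the sketch below is instead a deliberately simplified, assumed illustration of the load-coordination idea itself: route each request to the federated data center with the most spare capacity, and grow capacity (a stand-in for provisioning extra VMs) only when no site can absorb the request.

```python
# Toy federated load coordinator in the spirit of InterCloud: least-loaded
# placement with on-demand scale-out. A sketch of the idea, not the paper's
# CloudSim experiments; all numbers and names are illustrative.

class DataCenter:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0

    def spare(self):
        return self.capacity - self.load

def route(request_cost, sites):
    target = max(sites, key=lambda dc: dc.spare())     # most spare capacity
    if target.spare() < request_cost:                  # all sites saturated:
        target.capacity += request_cost - target.spare()  # grow just enough
    target.load += request_cost
    return target.name

if __name__ == "__main__":
    sites = [DataCenter("eu", 100), DataCenter("us", 80), DataCenter("ap", 60)]
    for _ in range(8):
        print(route(30, sites))   # placements shift as sites fill up
```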
Distributed Information Retrieval using Keyword Auctions
This report motivates the need for large-scale distributed approaches to information retrieval and proposes solutions based on keyword auctions.
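The one-sentence abstract gives no mechanism details, so the sketch below shows only the generic idea it gestures at: servers bid on keywords, and a query is routed to the highest bidder, which pays the second-highest bid. The second-price rule is a standard auction-theory choice assumed here, not necessarily the report's design.

```python
# Generic second-price keyword auction for routing queries to servers.
# An assumed illustration; the report's actual mechanism may differ.

def route_query(keyword, bids):
    """bids: {keyword: [(server, bid), ...]}. Returns (winner, price) or None."""
    entries = sorted(bids.get(keyword, []), key=lambda e: e[1], reverse=True)
    if not entries:
        return None
    winner, top_bid = entries[0]
    price = entries[1][1] if len(entries) > 1 else top_bid
    return winner, price   # winner pays the second-highest bid

if __name__ == "__main__":
    bids = {"grid": [("s1", 5.0), ("s2", 3.5), ("s3", 2.0)]}
    print(route_query("grid", bids))   # ('s1', 3.5)
```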