Selecting the best locations for data centers in resilient optical grid/cloud dimensioning
For optical grid/cloud scenarios, the dimensioning problem comprises not only deciding on the network dimensions (i.e., link bandwidths), but also choosing appropriate locations to install server infrastructure (i.e., data centers), as well as determining the amount of required server resources (for storage and/or processing). Given that users of such grid/cloud systems generally do not care about the exact physical locations of the server resources, a degree of freedom arises in choosing the most appropriate server location for each of their requests. We exploit this anycast routing principle (i.e., the source of the traffic is given, but the destination can be chosen rather freely) to also provide resilience: traffic may be relocated to alternate destinations in case of network/server failures. In this study, we propose to jointly optimize the link dimensioning and the location of the servers in an optical grid/cloud, where the anycast principle is applied for resiliency against either link or server node failures. While the data center location problem bears some resemblance to the classical p-center or k-means location problems, the anycast principle makes it considerably harder because of the link-disjoint paths required to ensure grid resilience.
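The joint placement/dimensioning problem can be made concrete with a toy ILP. The sketch below (using the PuLP solver; the topology, demands, site count K, and cost model are invented for illustration) picks K data-center sites and assigns every node's demand to an open site so as to minimize a demand-weighted hop-distance proxy for link capacity. The resilience constraints that make the paper's problem hard (link-disjoint backup paths, relocation under failures) are deliberately omitted to keep the sketch minimal.

```python
# Toy anycast placement ILP: choose K data-center sites and assign each
# node's demand to an open site, minimizing demand-weighted hop count as
# a proxy for link capacity. Topology, demands, and costs are invented;
# the paper's link-disjoint backup constraints are omitted for brevity.
from collections import deque
import pulp

nodes = ["A", "B", "C", "D", "E"]
demand = {"A": 4, "B": 2, "C": 3, "D": 1, "E": 2}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "A")]
K = 2  # number of data-center sites to open (assumed)

adj = {n: [] for n in nodes}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def hops(src):
    """BFS shortest hop counts from src to every node."""
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

dist = {v: hops(v) for v in nodes}

prob = pulp.LpProblem("anycast_placement", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", nodes, cat="Binary")             # site open?
x = pulp.LpVariable.dicts("assign", (nodes, nodes), cat="Binary")  # v served by s

prob += pulp.lpSum(demand[v] * dist[v][s] * x[v][s]
                   for v in nodes for s in nodes)                  # network cost
prob += pulp.lpSum(y[s] for s in nodes) == K
for v in nodes:
    prob += pulp.lpSum(x[v][s] for s in nodes) == 1                # serve v once
    for s in nodes:
        prob += x[v][s] <= y[s]                                    # only open sites

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open sites:", [s for s in nodes if y[s].value() == 1])
```

Adding resilience along the lines of the abstract would introduce, per failure scenario, a second set of (disjoint) routing variables and size each link and server capacity over all scenarios.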
Location Privacy in Spatial Crowdsourcing
Spatial crowdsourcing (SC) is a new platform that engages individuals in collecting and analyzing environmental, social, and other spatiotemporal information. With SC, requesters outsource their spatiotemporal tasks to a set of workers, who perform the tasks by physically traveling to the tasks' locations. This chapter identifies privacy threats toward both workers and requesters during the two main phases of spatial crowdsourcing: tasking and reporting. Tasking is the process of identifying which tasks should be assigned to which workers; this process is handled by a spatial crowdsourcing server (SC-server). The latter phase is reporting, in which workers travel to the tasks' locations, complete the tasks, and upload their reports to the SC-server. The challenge is to enable effective and efficient tasking as well as reporting in SC without disclosing the actual locations of workers (at least until they agree to perform a task) or of the tasks themselves (at least to workers who are not assigned to those tasks). This chapter aims to provide an overview of the state of the art in protecting users' location privacy in spatial crowdsourcing. We provide a comparative study of a diverse set of solutions in terms of task publishing modes (push vs. pull), problem focuses (tasking and reporting), threats (server, requester, and worker), and underlying technical approaches (from pseudonymity, cloaking, and perturbation to exchange-based and encryption-based techniques). The strengths and drawbacks of the techniques are highlighted, leading to a discussion of open problems and future work.
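Of the surveyed technique families, cloaking is the simplest to illustrate. The sketch below (cell size, coordinates, and function names are our assumptions, not the chapter's) snaps a worker's GPS fix to the center of a fixed grid cell, so the SC-server learns only a coarse region rather than the exact position:

```python
# Minimal spatial-cloaking sketch (one of the perturbation/cloaking
# families surveyed above): the worker reports only the centre of the
# grid cell containing its true position. Cell size and names are
# illustrative assumptions, not taken from the chapter.

CELL_DEG = 0.01  # ~1 km at the equator; coarser cells give more privacy

def cloak(lat: float, lon: float, cell: float = CELL_DEG) -> tuple[float, float]:
    """Return the centre of the grid cell containing (lat, lon)."""
    snap = lambda x: int(x // cell) * cell + cell / 2
    return snap(lat), snap(lon)

if __name__ == "__main__":
    exact = (34.0224, -118.2851)   # a worker's true position (made up)
    print(cloak(*exact))           # what the SC-server would see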
ENORM: A Framework For Edge NOde Resource Management
Current computing techniques using the cloud as a centralised server will become untenable as billions of devices get connected to the Internet. This raises the need for fog computing, which leverages computing at the edge of the network on nodes such as routers, base stations, and switches, along with the cloud. However, to realise fog computing, the challenge of managing edge nodes will need to be addressed. This paper is motivated by that resource management challenge. We develop the first framework to manage edge nodes, namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning and auto-scaling edge node resources are proposed. The feasibility of the framework is demonstrated on a Pokémon Go-like online game use case. The benefits of using ENORM are observed through application latency reduced by 20% to 80%, and data transfer and communication frequency between the edge node and the cloud reduced by up to 95%. These results highlight the potential of fog computing for improving quality of service and experience.

Comment: 14 pages; accepted to IEEE Transactions on Services Computing on 12 September 201
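The provisioning and auto-scaling mechanisms are the heart of the framework. As a rough illustration (not ENORM's actual API; the thresholds, monitor, and replica model are placeholders of ours), a threshold-driven scaler on an edge node might look like:

```python
# Threshold-based auto-scaling sketch in the spirit of ENORM's edge
# resource management (illustrative only: thresholds, the monitor, and
# the replica model are assumptions, not the framework's actual API).
import random

LOW, HIGH = 0.2, 0.8   # CPU-utilisation thresholds (assumed)
replicas = 1           # service replicas on the edge node

def utilisation() -> float:
    """Stand-in for a real resource monitor; returns load in [0, 1]."""
    return random.random()

for tick in range(10):              # one decision per monitoring tick
    u = utilisation()
    if u > HIGH:
        replicas += 1               # scale out on the edge node
    elif u < LOW and replicas > 1:
        replicas -= 1               # scale in, keep at least one replica
    print(f"tick {tick}: load={u:.2f} -> {replicas} replica(s)")
```

Scaling in on the edge node rather than falling back to the cloud is what drives the reported reduction in edge-to-cloud traffic.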
On the Benefit of Virtualization: Strategies for Flexible Server Allocation
Virtualization technology facilitates a dynamic, demand-driven allocation and migration of servers. This paper studies how the flexibility offered by network virtualization can be used to improve Quality-of-Service parameters such as latency, while taking allocation costs into account. A generic use case is considered where both the overall demand issued for a certain service (for example, an SAP application in the cloud, or a gaming application) and the origins of the requests change over time (e.g., due to time zone effects or user mobility), and we present online and optimal offline strategies to compute the number and location of the servers implementing this service. These algorithms also allow us to study the fundamental benefits of dynamic resource allocation compared to static systems. Our simulation results confirm our expectation that the gain of flexible server allocation is particularly high in scenarios with moderate dynamics.
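The trade-off between latency gains and allocation costs has a classic "rent-or-buy" flavour. The following sketch is our own simplification, not the paper's algorithm: it keeps a single server and migrates it only once the accumulated latency regret of staying put exceeds an assumed migration cost ALPHA.

```python
# Online server-migration rule in the rent-or-buy spirit of the
# strategies the paper analyses (a simplification of ours, not the
# paper's algorithm): migrate only when the accumulated extra latency
# of the current location, versus the best alternative, exceeds ALPHA.
ALPHA = 10.0   # migration cost (assumed units)

def serve(requests, positions, dist):
    """requests: request origins in arrival order; positions: candidate
    server sites; dist(a, b): latency metric between locations."""
    here = positions[0]
    regret = {p: 0.0 for p in positions}   # extra cost vs. each alternative
    total = 0.0
    for r in requests:
        total += dist(here, r)            # latency paid for this request
        for p in positions:
            regret[p] += dist(here, r) - dist(p, r)
        best = max(positions, key=lambda p: regret[p])
        if regret[best] > ALPHA:          # staying put got too expensive
            total += ALPHA                # pay to migrate
            here = best
            regret = {p: 0.0 for p in positions}
    return total

# toy 1-D example: demand shifts from site 0 to site 9, triggering one
# migration once the regret of staying exceeds ALPHA
print(serve([0, 0, 9, 9, 9, 9, 9], list(range(10)), lambda a, b: abs(a - b)))
```

Under slowly shifting request origins such a rule migrates rarely yet tracks the demand, which is consistent with the observation above that flexibility pays off most under moderate dynamics.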
Joint dimensioning of server and network infrastructure for resilient optical grids/clouds
We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: Under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that 1) chooses a given number of sites in a given network topology at which to install server infrastructure, and 2) determines the amount of both network and server capacity needed to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction in the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): Without exploiting relocation, the potential savings of FD versus FID are not meaningful.
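Schematically, such an ILP minimizes total network plus server capacity subject to routing and survivability constraints. The notation below is ours, written to convey the structure, not the paper's exact formulation:

```latex
% Schematic only; symbols are assumed, not the paper's notation.
% y_l: capacity on link l, k_s: servers at site s, c_l, w_s: unit costs.
\begin{aligned}
\min\quad & \sum_{\ell \in L} c_\ell\, y_\ell \;+\; \sum_{s \in S} w_s\, k_s \\
\text{s.t.}\quad & \text{exactly } K \text{ server sites are opened;}\\
& \text{each request is routed to some open site (anycast);}\\
& \text{for every link/node failure, a backup path exists to a possibly}\\
& \text{different site (FID: one backup choice; FD: per-failure choices);}\\
& y_\ell,\ k_s \ \text{sized for the worst case over all failure scenarios.}
\end{aligned}
```

The FD variant enlarges the solution space (a different backup per failure), which is why it can only undercut FID once relocation is also allowed.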
Finding Multiple New Optimal Locations in a Road Network
We study the problem of optimal location querying for location-based services in road networks, which aims to find locations for new servers or facilities. Existing optimal solutions to this problem consider only the case of a single new server. When two or more new servers are to be set up, the problem with the minmax cost criterion, MinMax, becomes NP-hard. In this work we identify some useful properties of the potential locations for the new servers, from which we derive a novel algorithm for MinMax, and show that it is efficient when the number of new servers is small. When the number of new servers is large, we propose an efficient 3-approximate algorithm. We verify with experiments on real road networks that our solutions are effective and attain significantly better result quality than existing greedy algorithms.
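For intuition on the minmax objective, the classic farthest-point greedy (a Gonzalez-style k-center heuristic) is sketched below. It is not the paper's 3-approximate road-network algorithm; the points and the distance function are placeholders for shortest-path distances on a road network.

```python
# Farthest-point greedy for minmax server placement, shown only for
# intuition about the objective; this is the classic k-center heuristic,
# NOT the paper's 3-approximate road-network algorithm. On a road
# network, dist would be shortest-path distance.

def greedy_minmax(clients, candidates, k, dist):
    """Pick k new server locations, each round placing a server near
    the client whose nearest chosen server is currently farthest."""
    chosen = []
    for _ in range(k):
        worst = max(clients,
                    key=lambda c: min((dist(c, s) for s in chosen),
                                      default=float("inf")))
        chosen.append(min(candidates, key=lambda p: dist(worst, p)))
    return chosen

# toy example on a line; real use would plug in road-network distances
clients = [1, 2, 8, 9, 15]
candidates = [0, 5, 10, 14]
print(greedy_minmax(clients, candidates, 2, lambda a, b: abs(a - b)))
```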