The distributed p-median problem in computer networks
The exponential growth of the Internet over the last decades has led to a significant evolution of network services and applications. One of the challenges is to provide better service scalability by placing service replicas in appropriate network locations.
Finding the optimal solution to the facility location problem is particularly complex and is not feasible for large-scale systems. Locating facilities in near-optimal locations has therefore been studied extensively, in many works and for different application domains. This work investigates one of the most notable problems in facility location, the p-median problem, which locates p facilities so as to minimise the overall communication cost. All previous studies on the p-median problem used a centralised approach to find a near-optimal solution: the required information is first gathered at a single node, where a sequential algorithm is applied to find a solution.
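For reference, the p-median objective can be written as follows, with F the set of candidate locations, C the set of clients (in this setting, both are network nodes), and d(c, f) the communication cost between c and f; this is the standard textbook formulation rather than notation taken from the thesis:

    \min_{S \subseteq F,\ |S| = p} \; \sum_{c \in C} \min_{f \in S} d(c, f)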
The centralised approach is infeasible in large-scale networks due to the time and space
complexity of the sequential algorithms as well as the large communication cost and latency
to aggregate the global information. Therefore, this work investigates the p-median problem
in a distributed environment.
To the best of the author's knowledge, this is the first work to study the distributed p-median problem for large-scale computer networks. Solving the p-median problem in a fully distributed way is a challenging task due to the lack of global knowledge and of a centralised coordinator.
Two new approaches for solving the p-median problem in a distributed environment are proposed in this thesis. Both are designed to be executed without any centralised collection of the data at a single node. Both methods apply an iterative heuristic that improves a random initial solution and converges to a final solution at a local minimum of the cost.
The first approach builds a global view of the system and improves the current solution
by replacing a single facility at each iteration. The second approach is designed according to the well-known k-medoids clustering algorithm: at each iteration a local view of each cluster is generated, and all facilities can be updated to optimise the solution.
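As a concrete reference point, the sketch below shows the classic k-medoids iteration that the second approach is modelled on, in a minimal single-machine form; the distributed protocol itself, with its per-cluster local views, is not reproduced here, and the names (nodes, dist) are illustrative assumptions.

    import random

    def kmedoids_step(nodes, medoids, dist):
        """One assignment-and-update round of classic k-medoids.

        nodes:   list of node identifiers
        medoids: current facility locations (a subset of nodes)
        dist:    dist[u][v] = communication cost between u and v
        """
        # Assignment phase: each node joins its cheapest medoid's cluster.
        clusters = {m: [] for m in medoids}
        for v in nodes:
            clusters[min(medoids, key=lambda m: dist[v][m])].append(v)

        # Update phase: inside each cluster, the member minimising the
        # cluster-internal cost becomes the new medoid.
        return [min(members, key=lambda m: sum(dist[v][m] for v in members))
                for members in clusters.values()]

    def kmedoids(nodes, dist, p, max_iters=100):
        medoids = random.sample(nodes, p)        # random initial solution
        for _ in range(max_iters):
            updated = kmedoids_step(nodes, medoids, dist)
            if set(updated) == set(medoids):     # local minimum reached
                return updated
            medoids = updated
        return medoids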
Both approaches were implemented within the Java-based PeerSim network simulator for
investigating the performance in large-scale systems and tested against different parameters
such as the size of networks, number of facilities to be placed and different initial solutions.
The results have shown that the first protocol is better at placing facilities, since it converges to a solution with a lower total cost than the second protocol. However, the second protocol is faster at optimising the solution.
Scalable Facility Location for Massive Graphs on Pregel-like Systems
We propose a new scalable algorithm for facility location. Facility location
is a classic problem, where the goal is to select a subset of facilities to
open, from a set of candidate facilities F, in order to serve a set of clients
C. The objective is to minimize the total cost of opening facilities plus the
cost of serving each client from the facility it is assigned to. In this work,
we are interested in the graph setting, where the cost of serving a client from
a facility is represented by the shortest-path distance on the graph. This
setting allows us to model natural problems arising in the Web and in social media
applications. It also allows us to leverage the inherent sparsity of such graphs,
as the input is much smaller than the full pairwise distances between all
vertices.
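In standard notation the objective reads as follows, with o_f the opening cost of facility f and d_G the shortest-path distance in the graph; the symbols are assumed here for illustration:

    \min_{S \subseteq F} \; \sum_{f \in S} o_f \; + \; \sum_{c \in C} \min_{f \in S} d_G(c, f)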
To obtain truly scalable performance, we design a parallel algorithm that
operates on clusters of shared-nothing machines. In particular, we target
modern Pregel-like architectures, and we implement our algorithm on Apache
Giraph. Our solution makes use of a recent result to build sketches for massive
graphs, and of a fast parallel algorithm to find maximal independent sets, as
building blocks. In so doing, we show how these problems can be solved on a
Pregel-like architecture, and we investigate the properties of these
algorithms. Extensive experimental results show that our algorithm scales
gracefully to graphs with billions of edges, while obtaining values of the
objective function that are competitive with a state-of-the-art sequential
algorithm.
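The Giraph implementation itself is not reproduced here, but one of the named building blocks, maximal independent sets, is easy to illustrate. The sketch below is a minimal single-machine rendering of the Luby-style randomized MIS scheme that parallel and vertex-centric systems commonly adapt; each while-iteration corresponds roughly to a Pregel superstep.

    import random

    def luby_mis(adj):
        """Luby-style maximal independent set.

        adj: dict mapping each vertex to the set of its neighbours
             in an undirected graph.
        """
        active, mis = set(adj), set()
        while active:
            # Every active vertex draws a random priority.
            prio = {v: random.random() for v in active}
            # A vertex joins the MIS if it beats all active neighbours.
            winners = {v for v in active
                       if all(prio[v] < prio[u]
                              for u in adj[v] if u in active)}
            mis |= winners
            # Winners and their neighbours leave the game.
            active -= winners | {u for w in winners for u in adj[w]}
        return mis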
Optimistic Concurrency Control for Distributed Unsupervised Learning
Research on distributed machine learning algorithms has focused primarily on
one of two extremes: algorithms that obey strict concurrency constraints or
algorithms that obey few or no such constraints. We consider an intermediate
alternative in which algorithms optimistically assume that conflicts are
unlikely and if conflicts do arise a conflict-resolution protocol is invoked.
We view this "optimistic concurrency control" paradigm as particularly
appropriate for large-scale machine learning algorithms, particularly in the
unsupervised setting. We demonstrate our approach in three problem areas:
clustering, feature learning and online facility location. We evaluate our
methods via large-scale experiments in a cluster computing environment.
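The paper's concrete algorithms for clustering, feature learning, and online facility location are not reproduced here, but the optimistic pattern itself can be sketched in a few lines. Everything below (the function names, the dictionary-based state) is an illustrative assumption, not the authors' implementation.

    def occ_epoch(batch, state, propose, conflicts, resolve):
        """One epoch of optimistic concurrency control.

        propose(item, snapshot) -> tentative update computed against a
                                   stale snapshot; safe to run in parallel
        conflicts(u, state)     -> True if u clashes with updates committed
                                   since the snapshot was taken
        resolve(u, state)       -> corrected update for the conflicting case
        """
        snapshot = dict(state)
        # Optimistic phase: all proposals use the same stale snapshot,
        # so this loop could run concurrently without locks.
        proposals = [propose(item, snapshot) for item in batch]

        # Serial validation phase: cheap when conflicts are rare,
        # which is exactly the assumption the paradigm relies on.
        for u in proposals:
            if conflicts(u, state):
                u = resolve(u, state)
            state.update(u)
        return state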
On the use of biased-randomized algorithms for solving non-smooth optimization problems
Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, some deadlines frequently cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., constraints that can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost is nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus exploring a large number of promising regions in short computing times. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
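To make the core idea concrete, here is a minimal sketch of a biased-randomized construction step, assuming a geometric-style bias; the parameter beta and all names are illustrative, and a full implementation would typically re-evaluate candidate costs after every pick.

    import random

    def biased_pick(ranked, beta=0.3):
        """Pick from a best-first ranked list under a geometric bias.

        With probability beta take the best remaining candidate,
        otherwise skip it and retry on the rest, so good candidates
        are favoured without being chosen deterministically.
        """
        for c in ranked[:-1]:
            if random.random() < beta:
                return c
        return ranked[-1]          # fallback: last remaining candidate

    def biased_randomized_construction(candidates, cost, beta=0.3):
        """Greedy constructive heuristic with biased randomization."""
        remaining = sorted(candidates, key=cost)   # best first
        solution = []
        while remaining:
            c = biased_pick(remaining, beta)
            solution.append(c)
            remaining.remove(c)
        return solution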