A fast heuristic algorithm for the critical node problem
The critical node problem (CNP) aims to identify a subset of critical nodes in an undirected graph such that removing them minimizes the pairwise node connectivity of the residual graph. CNP has various applications but is computationally challenging. This paper introduces FastCNP, a fast heuristic algorithm for the problem. FastCNP employs an effective two-phase node exchange strategy to locate high-quality solutions and applies a destructive-constructive perturbation procedure to drive the search into new regions when it stagnates. Computational results on 16 popular benchmark instances show that FastCNP finds improved best results (new upper bounds) for 6 instances and matches the best-known results for 9 instances.
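The CNP objective the abstract describes (pairwise node connectivity of the residual graph) is the number of node pairs that remain connected after the critical nodes are deleted, i.e. the sum of C(size, 2) over the connected components. A minimal sketch of evaluating that objective for a candidate removal set:

```python
def pairwise_connectivity(n, edges, removed):
    """CNP objective: number of node pairs still connected after
    deleting the nodes in `removed`. Computed as the sum over
    connected components of size * (size - 1) / 2."""
    removed = set(removed)
    adj = {v: [] for v in range(n) if v not in removed}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, total = set(), 0
    for s in adj:
        if s in seen:
            continue
        # depth-first traversal to measure the component containing s
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += size * (size - 1) // 2
    return total
```

A heuristic such as FastCNP searches over removal sets to minimize this quantity; on a 5-node path, removing the middle node drops the objective from 10 connected pairs to 2.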
Considerations about multistep community detection
The problem and implications of community detection in networks have attracted huge attention, owing to important applications in both the natural and social sciences. A number of algorithms have been developed to solve this problem, addressing either speed optimization or the quality of the computed partitions. In this paper we propose a multi-step procedure bridging the fastest but less accurate algorithms (coarse clustering) with the slowest, most effective ones (refinement). By heuristically ranking the nodes and classifying a fraction of them as `critical', the refinement step can be restricted to this subset of the network, saving computational time. Preliminary numerical results are discussed, showing improvement of the final partition.
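The key idea above is selecting the `critical' fraction of nodes on which refinement is run. A minimal sketch, assuming a boundary-based ranking heuristic (the fraction of a node's neighbours assigned to a different community); the paper's exact ranking criterion may differ:

```python
def critical_nodes(adj, labels, frac=0.1):
    """Rank nodes by the fraction of their neighbours lying in a
    different community (an illustrative 'criticality' score) and
    return the top `frac` of nodes, to which refinement is restricted."""
    def score(v):
        nbrs = adj[v]
        if not nbrs:
            return 0.0
        return sum(labels[u] != labels[v] for u in nbrs) / len(nbrs)
    ranked = sorted(adj, key=score, reverse=True)
    k = max(1, int(frac * len(ranked)))
    return ranked[:k]
```

Only the returned subset is passed to the slower refinement algorithm, so the expensive step scales with `frac * n` rather than `n`.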
A More Reliable Greedy Heuristic for Maximum Matchings in Sparse Random Graphs
We propose a new greedy algorithm for the maximum cardinality matching
problem. We give experimental evidence that this algorithm is likely to find a
maximum matching in random graphs with constant expected degree c>0,
independent of the value of c. This is contrary to the behavior of commonly
used greedy matching heuristics which are known to have some range of c where
they probably fail to compute a maximum matching.
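For context, a sketch of the kind of commonly used greedy matching heuristic the abstract compares against, in the Karp-Sipser style: matching a degree-1 vertex to its unique neighbour is always safe, and a random edge is taken otherwise. This is an illustrative baseline, not the paper's new algorithm:

```python
import random

def greedy_matching(adj):
    """Karp-Sipser-style greedy baseline for maximum cardinality matching.
    Prefer edges incident to a degree-1 vertex (always contained in some
    maximum matching); otherwise pick a random remaining edge."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    matching = []
    while any(adj.values()):
        deg1 = [v for v, nbrs in adj.items() if len(nbrs) == 1]
        if deg1:
            u = deg1[0]
            v = next(iter(adj[u]))
        else:
            u = random.choice([v for v in adj if adj[v]])
            v = random.choice(sorted(adj[u]))
        matching.append((u, v))
        # remove both endpoints and all their incident edges
        for w in (u, v):
            for x in adj[w]:
                adj[x].discard(w)
            adj[w] = set()
    return matching
```

On sparse random graphs with expected degree c, heuristics of this shape are exact only for some range of c; the abstract's claim is that the proposed algorithm avoids that dependence.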
Static and Dynamic Path Planning Using Incremental Heuristic Search
Path planning is an important component in any highly automated vehicle
system. In this report, the general problem of path planning is considered
first in partially known static environments where only static obstacles are
present but the layout of the environment is changing as the agent acquires new
information. Attention is then given to the problem of path planning in dynamic
environments where there are moving obstacles in addition to the static ones.
Specifically, a 2D car-like agent traversing in a 2D environment was
considered. It was found that the traditional configuration-time space approach
is unsuitable for producing trajectories consistent with the dynamic
constraints of a car. A novel scheme is then suggested where the state space is
4D consisting of position, speed and time but the search is done in the 3D
space composed of position and speed. Simulation tests show that the new scheme efficiently produces trajectories respecting the dynamic constraints of a car-like agent, with a bound on their optimality.
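The scheme above can be sketched as a successor function in which the full state is 4D (position, speed, time) but the search keys only on the 3D tuple (x, y, speed), carrying time along as cost. Grid motion, the acceleration bound, and the action set here are illustrative assumptions, not the report's exact model:

```python
def successors(state, dt=1.0, a_max=1.0, v_max=3.0, step=1.0):
    """Expand a search node keyed on (x, y, speed); time rides along
    as accumulated cost rather than as a fourth search dimension.
    Speed changes are limited by a_max * dt, modelling the dynamic
    constraint of a car-like agent."""
    (x, y, v), t = state
    out = []
    for dv in (-a_max * dt, 0.0, a_max * dt):   # brake / coast / accelerate
        nv = min(max(v + dv, 0.0), v_max)
        if nv == 0.0:
            out.append(((x, y, nv), t + dt))    # stopped: wait in place
            continue
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            out.append(((x + dx, y + dy, nv), t + dt))
    return out
```

Keeping time out of the key shrinks the search space, while retaining it in the cost lets the planner time its arrival around moving obstacles.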
Learning Scheduling Algorithms for Data Processing Clusters
Efficiently scheduling data processing jobs on distributed compute clusters
requires complex algorithms. Current systems, however, use simple generalized
heuristics and ignore workload characteristics, since developing and tuning a
scheduling policy for each workload is infeasible. In this paper, we show that
modern machine learning techniques can generate highly-efficient policies
automatically. Decima uses reinforcement learning (RL) and neural networks to
learn workload-specific scheduling algorithms without any human instruction
beyond a high-level objective such as minimizing average job completion time.
Off-the-shelf RL techniques, however, cannot handle the complexity and scale of
the scheduling problem. To build Decima, we had to develop new representations
for jobs' dependency graphs, design scalable RL models, and invent RL training
methods for dealing with continuous stochastic job arrivals. Our prototype
integration with Spark on a 25-node cluster shows that Decima improves the
average job completion time over hand-tuned scheduling heuristics by at least
21%, achieving up to a 2x improvement during periods of high cluster load.
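At each scheduling event, a Decima-style agent scores the runnable job stages with a learned policy and picks one. A toy sketch of that action selection: Decima's real policy is a graph neural network over the job DAGs, so the hand-made features and linear scoring below are illustrative stand-ins:

```python
import math

def schedule_step(jobs, weights):
    """Score each runnable job with a linear policy over simple features,
    convert scores to a softmax distribution, and return the greedy action.
    A stand-in for Decima's GNN policy; feature names are hypothetical."""
    def score(job):
        feats = [job["remaining_work"], job["num_tasks"], job["wait_time"]]
        return sum(w * f for w, f in zip(weights, feats))
    logits = [score(j) for j in jobs]
    m = max(logits)                      # stabilize the softmax
    exp = [math.exp(l - m) for l in logits]
    z = sum(exp)
    probs = [e / z for e in exp]
    # greedy action: run the highest-probability job next
    return max(range(len(jobs)), key=probs.__getitem__), probs
```

During RL training the softmax probabilities would be sampled from and the weights updated from the reward (e.g. negative job completion time); at deployment the greedy action suffices.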