Balancing Global Exploration and Local-connectivity Exploitation with Rapidly-exploring Random disjointed-Trees
Sampling efficiency in a highly constrained environment has long been a major
challenge for sampling-based planners. In this work, we propose
Rapidly-exploring Random disjointed-Trees* (RRdT*), an incremental optimal
multi-query planner. RRdT* uses multiple disjointed-trees to exploit
local-connectivity of spaces via Markov Chain random sampling, which utilises
neighbourhood information derived from previous successful and failed samples.
To balance local exploitation, RRdT* actively explores unseen global spaces when
local-connectivity exploitation is unsuccessful. The active trade-off between
local exploitation and global exploration is formulated as a multi-armed bandit
problem. We argue that the active balancing of global exploration and local
exploitation is the key to improving sample efficiency in sampling-based motion
planners. We provide rigorous proofs of completeness and optimal convergence
for this novel approach. Furthermore, we demonstrate experimentally the
effectiveness of RRdT*'s locally exploring trees in granting improved
visibility for planning. Consequently, RRdT* outperforms existing
state-of-the-art incremental planners, especially in highly constrained
environments.
Comment: Submitted to IEEE International Conference on Robotics and Automation (ICRA) 201
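The abstract frames the choice between extending an existing disjointed tree (exploitation) and sampling unseen space (exploration) as a multi-armed bandit problem. A minimal sketch of that idea, using the standard UCB1 arm-selection rule over per-tree success statistics; the function name and the exact scoring are illustrative, not the paper's formulation:

```python
import math

def ucb1_select(successes, attempts, total_pulls, c=2.0):
    """Pick the arm (tree) with the highest UCB1 score.

    successes[i] / attempts[i] estimates how often sampling near
    tree i succeeds (local-connectivity exploitation); the square-
    root confidence term favours rarely tried arms (exploration).
    """
    best, best_score = 0, float("-inf")
    for i, (s, n) in enumerate(zip(successes, attempts)):
        if n == 0:
            return i  # try every arm at least once
        score = s / n + math.sqrt(c * math.log(total_pulls) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

An arm whose recent samples keep failing sees its score decay, so the planner naturally shifts effort toward under-explored regions, which matches the exploration/exploitation trade-off the abstract describes.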
Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs
In this paper, we present Batch Informed Trees (BIT*), a planning algorithm
based on unifying graph- and sampling-based planning techniques. By recognizing
that a set of samples describes an implicit random geometric graph (RGG), we
are able to combine the efficient ordered nature of graph-based techniques,
such as A*, with the anytime scalability of sampling-based algorithms, such as
Rapidly-exploring Random Trees (RRT).
BIT* uses a heuristic to efficiently search a series of increasingly dense
implicit RGGs while reusing previous information. It can be viewed as an
extension of incremental graph-search techniques, such as Lifelong Planning A*
(LPA*), to continuous problem domains as well as a generalization of existing
sampling-based optimal planners. It is shown that it is probabilistically
complete and asymptotically optimal.
We demonstrate the utility of BIT* on simulated random worlds and on
manipulation problems on CMU's HERB, a
14-DOF two-armed robot. On these problems, BIT* finds better solutions faster
than RRT, RRT*, Informed RRT*, and Fast Marching Trees (FMT*) with faster
anytime convergence towards the optimum, especially in high dimensions.
Comment: 8 Pages. 6 Figures. Video available at http://www.youtube.com/watch?v=TQIoCC48gp
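BIT*'s central idea is to order candidate edges of the implicit RGG by an admissible estimate of the solution cost through them, as A* orders vertices. A minimal sketch of that edge ordering, with Euclidean distance standing in for both the edge-cost and cost-to-go heuristics; collision checking, batching, and rewiring are omitted, and all names here are illustrative:

```python
import heapq
import math

def dist(a, b):
    """Straight-line distance, used as an admissible heuristic."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_first_edges(g, tree_vertices, samples, goal, c_best=math.inf):
    """Yield candidate edges (v, x) in order of the estimated cost of
    a solution through them: g[v] + dist(v, x) + dist(x, goal).

    Edges whose estimate cannot beat the incumbent solution cost
    c_best are pruned, as in BIT*'s informed search.
    """
    queue = []
    for v in tree_vertices:
        for x in samples:
            key = g[v] + dist(v, x) + dist(x, goal)
            if key < c_best:
                heapq.heappush(queue, (key, v, x))
    while queue:
        yield heapq.heappop(queue)
```

Because edges are processed best-first, the search can stop expanding as soon as the best remaining estimate exceeds the current solution cost, which is what gives BIT* its A*-like ordered behaviour over batches of samples.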
FedRR: a federated resource reservation algorithm for multimedia services
The Internet is rapidly evolving towards a multimedia service delivery platform. However, existing Internet-based content delivery approaches have several disadvantages, such as the lack of Quality of Service (QoS) guarantees. Future Internet research has presented several promising ideas to solve the issues related to the current Internet, such as federations across network domains and end-to-end QoS reservations. This paper presents an architecture for the delivery of multimedia content across the Internet, based on these novel principles. It facilitates the collaboration between the stakeholders involved in the content delivery process, allowing them to set up loosely-coupled federations. More specifically, the Federated Resource Reservation (FedRR) algorithm is proposed. It identifies suitable federation partners, selects end-to-end paths between content providers and their customers, and optimally configures intermediary network and infrastructure resources in order to satisfy the requested QoS requirements and minimize delivery costs.
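The path-selection step described above amounts to a QoS-constrained cheapest-path problem: minimize delivery cost subject to an end-to-end delay bound. A minimal sketch of that sub-problem via a label-setting search; the graph encoding and function name are illustrative and this is not the FedRR algorithm itself, which additionally selects federation partners and reserves resources:

```python
import heapq

def cheapest_feasible_path(graph, src, dst, max_delay):
    """Cheapest src -> dst path whose total delay is <= max_delay.

    graph: {u: [(v, cost, delay), ...]} adjacency lists.
    States are (cost, delay, node); popping in cost order means the
    first time dst is popped, its cost is the feasible minimum.
    """
    heap = [(0, 0, src)]
    best = {}  # (node, delay) -> cheapest cost seen for that label
    while heap:
        cost, delay, u = heapq.heappop(heap)
        if u == dst:
            return cost
        for v, c, d in graph.get(u, []):
            nd = delay + d
            if nd > max_delay:
                continue  # would violate the QoS delay bound
            if best.get((v, nd), float("inf")) <= cost + c:
                continue  # a dominated label, skip it
            best[(v, nd)] = cost + c
            heapq.heappush(heap, (cost + c, nd, v))
    return None  # no path satisfies the delay constraint
```

Tightening the delay bound can force the search off the cheapest route onto a more expensive but faster one, which is exactly the cost/QoS tension the abstract attributes to FedRR.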
The Fast Heuristic Algorithms and Post-Processing Techniques to Design Large and Low-Cost Communication Networks
It is challenging to design large and low-cost communication networks. In
this paper, we formulate this challenge as the prize-collecting Steiner Tree
Problem (PCSTP). The objective is to minimize the costs of transmission routes
and the disconnected monetary or informational profits. Initially, we note that
the PCSTP is MAX SNP-hard. Then, we propose some post-processing techniques to
improve suboptimal solutions to PCSTP. Based on these techniques, we propose
two fast heuristic algorithms: the first one is a quasilinear time heuristic
algorithm that is faster and consumes less memory than other algorithms; and
the second one is an improvement of a state-of-the-art polynomial time heuristic
algorithm that can find high-quality solutions at a speed that is only inferior
to the first one. We demonstrate the competitiveness of our heuristic
algorithms by comparing them with the state-of-the-art ones on the largest
existing benchmark instances (169 800 vertices and 338 551 edges). Moreover, we
generate new instances that are even larger (1 000 000 vertices and 10 000 000
edges) to further demonstrate their advantages in large networks. The
state-of-the-art algorithms are too slow to find high-quality solutions for
instances of this size, whereas our new heuristic algorithms can do this in
around 6 to 45s on a personal computer. Ultimately, we apply our
post-processing techniques to update the best-known solution for a notoriously
difficult benchmark instance to show that they can improve near-optimal
solutions to PCSTP. In conclusion, we demonstrate the usefulness of our
heuristic algorithms and post-processing techniques for designing large and
low-cost communication networks.
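The PCSTP objective trades edge costs against the prizes lost by leaving vertices disconnected. One simple post-processing idea in that spirit is leaf pruning: dropping a leaf is profitable whenever its connecting edge costs more than the prize it collects. A minimal sketch, assuming the solution tree is stored as nested adjacency dicts; this illustrates the flavour of such techniques, not the specific post-processing of the paper:

```python
def prune_unprofitable_leaves(tree_adj, prizes, root):
    """Repeatedly remove any non-root leaf whose connecting edge
    costs more than its prize, since paying the lost prize is then
    cheaper than paying for the edge.

    tree_adj: {v: {u: edge_cost}} for the current solution tree.
    Mutates tree_adj in place and returns it.
    """
    changed = True
    while changed:
        changed = False
        for v in list(tree_adj):
            if v != root and len(tree_adj[v]) == 1:
                (u, cost), = tree_adj[v].items()
                if cost > prizes.get(v, 0):
                    del tree_adj[u][v]  # detach the leaf ...
                    del tree_adj[v]     # ... and forget it
                    changed = True
    return tree_adj
```

Pruning a leaf can expose its parent as a new unprofitable leaf, hence the fixed-point loop; each pass strictly lowers the objective, so the result is never worse than the input solution.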
QuickXsort: Efficient Sorting with n log n - 1.399n + o(n) Comparisons on Average
In this paper we generalize the idea of QuickHeapsort leading to the notion
of QuickXsort. Given some external sorting algorithm X, QuickXsort yields an
internal sorting algorithm if X satisfies certain natural conditions.
With QuickWeakHeapsort and QuickMergesort we present two examples for the
QuickXsort-construction. Both are efficient algorithms that incur approximately
n log n - 1.26n + o(n) comparisons on average. A worst case of n log n +
O(n) comparisons can be achieved without significantly affecting the average
case.
Furthermore, we describe an implementation of MergeInsertion for small n.
Taking MergeInsertion as a base case for QuickMergesort, we establish a
worst-case efficient sorting algorithm calling for n log n - 1.3999n + o(n)
comparisons on average. QuickMergesort with constant size base cases shows the
best performance on practical inputs: when sorting integers it is slower than
STL-Introsort by only 15%.
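The control flow of the QuickXsort construction can be sketched as follows: partition as in Quicksort, sort one partition with the external algorithm X, and recurse on the other. A minimal sketch with Python's merge-based sorted() standing in for Mergesort; note the real construction sorts one part with X in place, using the other part as its swap buffer, which this out-of-place sketch deliberately does not reproduce:

```python
import random

def quick_x_sort(a, x_sort=sorted):
    """Simplified QuickXsort control flow.

    Partition around a random pivot, hand the larger side to the
    external algorithm X (x_sort), and recurse QuickXsort on the
    smaller side. Returns a new sorted list.
    """
    if len(a) <= 1:
        return list(a)
    pivot = a[random.randrange(len(a))]
    left = [v for v in a if v < pivot]
    mid = [v for v in a if v == pivot]
    right = [v for v in a if v > pivot]
    # Sort the larger side with X, recurse on the smaller side.
    if len(left) >= len(right):
        return x_sort(left) + mid + quick_x_sort(right, x_sort)
    return quick_x_sort(left, x_sort) + mid + x_sort(right)
```

Because the recursion always continues on the smaller partition, the expected extra work beyond one run of X is small, which is why QuickXsort inherits X's comparison count up to lower-order terms.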