An improved constraint satisfaction adaptive neural network for job-shop scheduling
Copyright @ Springer Science + Business Media, LLC 2009
This paper presents an improved constraint satisfaction adaptive neural network for job-shop scheduling problems. The neural network is constructed based on the constraint conditions of a job-shop scheduling problem. Its structure and neuron connections can change adaptively according to the real-time constraint satisfaction situations that arise during the solving process. Several heuristics are also integrated within the neural network to enhance and accelerate its convergence and to improve the quality of the solutions produced. An experimental study based on a set of benchmark job-shop scheduling problems shows that the improved constraint satisfaction adaptive neural network outperforms the original constraint satisfaction adaptive neural network in terms of computational time and the quality of schedules it produces. The neural network approach is also experimentally validated to outperform three classical heuristic algorithms that are widely used as the basis of many state-of-the-art scheduling systems. Hence, it may also be used to construct advanced job-shop scheduling systems.
This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01, and in part by the National Natural Science Foundation of China under Grant 60821063 and the National Basic Research Program of China under Grant 2009CB320601.
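The constraints such a network encodes are the standard job-shop ones: operations within a job execute in order, and no two operations overlap on the same machine. As a minimal illustrative sketch (the data layout and function name are ours, not the paper's), a feasibility check over a candidate schedule can be written as:

```python
def is_feasible(schedule, jobs):
    """Check the two constraint families a job-shop schedule must satisfy.

    schedule: {(job, op_index): (machine, start, end)}
    jobs: jobs[j] is a list of (machine, duration) operations in order.
    """
    # Precedence: each operation starts only after its job predecessor ends.
    for j, ops in enumerate(jobs):
        for k in range(1, len(ops)):
            if schedule[(j, k)][1] < schedule[(j, k - 1)][2]:
                return False
    # Resource: operations on the same machine must not overlap in time.
    by_machine = {}
    for (j, k), (m, s, e) in schedule.items():
        by_machine.setdefault(m, []).append((s, e))
    for intervals in by_machine.values():
        intervals.sort()
        for (_, e1), (s2, _) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False
    return True
```

In the paper's approach, violations of these two constraint families are what drive the adaptive changes to the network's structure; the check above only tests them statically.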
EECBS: A Bounded-Suboptimal Search for Multi-Agent Path Finding
Multi-Agent Path Finding (MAPF), i.e., finding collision-free paths for
multiple robots, is important for many applications where small runtimes are
necessary, including the kind of automated warehouses operated by Amazon. CBS
is a leading two-level search algorithm for solving MAPF optimally. ECBS is a
bounded-suboptimal variant of CBS that uses focal search to speed up CBS by
sacrificing optimality and instead guaranteeing that the costs of its solutions
are within a given factor of optimal. In this paper, we study how to decrease
its runtime even further using inadmissible heuristics. Motivated by Explicit
Estimation Search (EES), we propose Explicit Estimation CBS (EECBS), a new
bounded-suboptimal variant of CBS, that uses online learning to obtain
inadmissible estimates of the cost of the solution of each high-level node and
uses EES to choose which high-level node to expand next. We also investigate
recent improvements of CBS and adapt them to EECBS. We find that EECBS with the
improvements runs significantly faster than the state-of-the-art
bounded-suboptimal MAPF algorithms ECBS, BCP-7, and eMDD-SAT on a variety of
MAPF instances. We hope that the scalability of EECBS enables additional
applications for bounded-suboptimal MAPF algorithms.
Comment: Published at AAAI 202
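The high-level node selection in ECBS and EECBS rests on focal search: keep the suboptimality guarantee by only expanding nodes whose lower bound is within a factor w of the best lower bound, but within that FOCAL list expand by a (possibly inadmissible) estimate. A generic sketch of that selection rule (illustrative, not the authors' implementation):

```python
import heapq

def focal_pop(open_list, w, d_hat):
    """Pop a node focal-search style.

    open_list: heap of (f, node) pairs, where f is an admissible lower
    bound. Among nodes with f <= w * f_min (the FOCAL list), expand the
    one the inadmissible estimate d_hat ranks best; any solution found
    this way costs at most w times the optimum.
    """
    f_min = open_list[0][0]                       # best admissible lower bound
    focal = [(f, n) for f, n in open_list if f <= w * f_min]
    best = min(focal, key=lambda fn: d_hat(fn[1]))
    open_list.remove(best)
    heapq.heapify(open_list)                      # restore the heap invariant
    return best[1]
```

EECBS replaces the fixed focal rule with EES-style selection, where the inadmissible estimates are learned online; the bounded-suboptimality argument is the same.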
Heuristic search under time and cost bounds
Intelligence is difficult to formally define, but one of its hallmarks is the ability to find a solution to a novel problem. Therefore it makes good sense that heuristic search is a foundational topic in artificial intelligence. In this context, search refers to the process of finding a solution to a problem by considering a large, possibly infinite, set of potential plans of action. Heuristic refers to a rule of thumb or a guiding, if not always accurate, principle. Heuristic search describes a family of techniques which consider members of the set of potential plans of action in turn, as determined by the heuristic, until a suitable solution to the problem is discovered.
This work is concerned primarily with suboptimal heuristic search algorithms. These algorithms are not inherently flawed, but they are suboptimal in the sense that the plans that they return may be more expensive than a least cost, or optimal, plan for the problem. While suboptimal heuristic search algorithms may not return least cost solutions to the problem, they are often far faster than their optimal counterparts, making them more attractive for many applications.
The thesis of this dissertation is that the performance of suboptimal search algorithms can be improved by taking advantage of information that, while widely available, has been overlooked. In particular, we will see how estimates of the length of a plan, estimates of plan cost that do not err on the side of caution, and measurements of the accuracy of our estimators can be used to improve the performance of suboptimal heuristic search algorithms.
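One concrete way to use "measurements of the accuracy of our estimators" is to debias the cost-to-go heuristic online: track the mean one-step heuristic error observed during search and scale it by an estimate of the remaining plan length. A simplified sketch of that idea (our own reduction, not the dissertation's exact formulation):

```python
class HHat:
    """Debiased cost-to-go estimate: h_hat(n) = h(n) + e_mean * d(n),
    where e_mean is the observed mean one-step heuristic error and
    d(n) estimates the number of steps (plan length) remaining."""

    def __init__(self):
        self.err_sum = 0.0
        self.count = 0

    def observe(self, h_parent, h_child, step_cost):
        # One-step error: how much h failed to drop relative to the cost paid.
        self.err_sum += (h_child + step_cost) - h_parent
        self.count += 1

    def estimate(self, h, d):
        e_mean = self.err_sum / self.count if self.count else 0.0
        return h + e_mean * d
```

With no observations the estimate falls back to the raw heuristic; as error observations accumulate, nodes far from a goal (large d) are penalized proportionally more, which is exactly the "do not err on the side of caution" behaviour the abstract alludes to.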
Job-shop scheduling with an adaptive neural network and local search hybrid approach
This article is posted here with permission from IEEE - Copyright @ 2006 IEEE
Job-shop scheduling is one of the most difficult production scheduling problems in industry. This paper proposes an adaptive neural network and local search hybrid approach for the job-shop scheduling problem. The adaptive neural network is constructed based on the constraint satisfactions of job-shop scheduling and can adapt its structure and neuron connections during the solving process. The neural network is used to find feasible schedules for the job-shop scheduling problem, while the local search scheme aims to improve performance by searching the neighbourhood of a given feasible schedule. The experimental study validates the proposed hybrid approach for job-shop scheduling in terms of solution quality and computing speed.
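The local-search half of such a hybrid can be as simple as evaluating adjacent swaps in each machine's processing sequence and keeping those that reduce makespan. A self-contained toy sketch (our own formulation, assuming feasible machine sequences; not the paper's neighbourhood):

```python
def makespan(jobs, machine_seqs):
    """Semi-active makespan of a job-shop schedule.

    jobs: jobs[j] = list of (machine, duration) in processing order.
    machine_seqs: machine_seqs[m] = list of (job, op_index) giving the
    order in which machine m processes its operations.
    Returns float('inf') if the sequences deadlock (are infeasible).
    """
    end = {}                                   # (job, op) -> completion time
    pending = {m: list(seq) for m, seq in machine_seqs.items()}
    mach_free = {m: 0 for m in machine_seqs}
    scheduled = True
    while scheduled:
        scheduled = False
        for m, seq in pending.items():
            if not seq:
                continue
            j, k = seq[0]
            if k > 0 and (j, k - 1) not in end:
                continue                       # job predecessor not done yet
            start = max(mach_free[m], end.get((j, k - 1), 0))
            end[(j, k)] = start + jobs[j][k][1]
            mach_free[m] = end[(j, k)]
            seq.pop(0)
            scheduled = True
    total_ops = sum(len(ops) for ops in jobs)
    return max(end.values()) if len(end) == total_ops else float('inf')

def improve(jobs, machine_seqs):
    """One pass of adjacent-swap local search; mutates machine_seqs,
    keeping every swap that strictly reduces makespan."""
    best = makespan(jobs, machine_seqs)
    for seq in machine_seqs.values():
        for i in range(len(seq) - 1):
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            cand = makespan(jobs, machine_seqs)
            if cand < best:
                best = cand                    # keep the improving swap
            else:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]  # undo
    return best
```

In the paper's hybrid, the neural network supplies the feasible starting schedule; a neighbourhood pass like the one above then trades a little extra computation for shorter schedules.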
Informed RRT*: Optimal Sampling-based Path Planning Focused via Direct Sampling of an Admissible Ellipsoidal Heuristic
Rapidly-exploring random trees (RRTs) are popular in motion planning because
they find solutions efficiently to single-query problems. Optimal RRTs (RRT*s)
extend RRTs to the problem of finding the optimal solution, but in doing so
asymptotically find the optimal path from the initial state to every state in
the planning domain. This behaviour is not only inefficient but also
inconsistent with their single-query nature.
For problems seeking to minimize path length, the subset of states that can
improve a solution can be described by a prolate hyperspheroid. We show that
unless this subset is sampled directly, the probability of improving a solution
becomes arbitrarily small in large worlds or high state dimensions. In this
paper, we present an exact method to focus the search by directly sampling this
subset.
The advantages of the presented sampling technique are demonstrated with a
new algorithm, Informed RRT*. This method retains the same probabilistic
guarantees on completeness and optimality as RRT* while improving the
convergence rate and final solution quality. We present the algorithm as a
simple modification to RRT* that could be further extended by more advanced
path-planning algorithms. We show experimentally that it outperforms RRT* in
rate of convergence, final solution cost, and ability to find difficult
passages while demonstrating less dependence on the state dimension and range
of the planning problem.
Comment: 8 pages, 11 figures. Videos available at
https://www.youtube.com/watch?v=d7dX5MvDYTc and
https://www.youtube.com/watch?v=nsl-5MZfwu
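The prolate hyperspheroid can be sampled directly by drawing uniform samples from the unit ball and mapping them through a stretch, rotation, and translation determined by the current best cost. A 2-D sketch of that transformation (illustrative, not the authors' code; in 2-D the hyperspheroid is an ellipse with the start and goal as foci):

```python
import math
import random

def sample_informed(x_start, x_goal, c_best):
    """Sample uniformly from the set of 2-D states x that could improve a
    solution of cost c_best: ||x - x_start|| + ||x - x_goal|| <= c_best."""
    c_min = math.dist(x_start, x_goal)           # distance between the foci
    centre = ((x_start[0] + x_goal[0]) / 2, (x_start[1] + x_goal[1]) / 2)
    r1 = c_best / 2                              # semi-major axis
    r2 = math.sqrt(c_best**2 - c_min**2) / 2     # semi-minor axis
    theta = math.atan2(x_goal[1] - x_start[1], x_goal[0] - x_start[0])
    # Uniform sample in the unit disc (rejection), then stretch to the
    # ellipse axes, rotate onto the start-goal axis, and translate.
    while True:
        u, v = random.uniform(-1, 1), random.uniform(-1, 1)
        if u * u + v * v <= 1:
            break
    ex, ey = r1 * u, r2 * v
    return (centre[0] + ex * math.cos(theta) - ey * math.sin(theta),
            centre[1] + ex * math.sin(theta) + ey * math.cos(theta))
```

Because the map from the disc to the ellipse is linear, uniform samples stay uniform; as the solution cost c_best shrinks toward c_min, the sampled region collapses onto the start-goal line, which is what focuses the search.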