7 research outputs found

    Hybrid artificial bee colony and flower pollination algorithm for grid-based optimal pathfinding

    Pathfinding is essential for agent movement in computer games and many other applications. In general, a pathfinding algorithm searches for the shortest feasible path from a start location to an end location. This task is computationally expensive and consumes a large amount of memory, particularly on large maps. Obstacle avoidance in the game environment further increases the complexity of finding a new path in the search space. A large number of algorithms, including heuristic and metaheuristic approaches, have been proposed to solve the pathfinding problem. Artificial Bee Colony (ABC) is a metaheuristic algorithm that is robust, converges quickly, is highly flexible, and has few control parameters. However, the best solution found by the onlooker bee in the presence of constraints is still insufficient and not always satisfactory. A number of ABC variants have been proposed to reach the optimal solution, but achieving it consistently remains difficult. Alternatively, the Flower Pollination Algorithm (FPA) is a promising algorithm for optimisation problems: it is easy to implement and reaches an optimum solution quickly. This research therefore proposed a hybrid Artificial Bee Colony – Flower Pollination Algorithm (ABC-FPA) to solve the pathfinding problem in games, evaluated in terms of path cost, computing time, and memory. The results showed that ABC-FPA improved the path cost by 81.68% and reduced computing time by 97.84% compared with the ABC algorithm, indicating that ABC-FPA gives better-quality pathfinding results.
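As background, the two update rules of the standard FPA can be sketched on a toy continuous objective. This is a generic illustration, not the paper's hybrid ABC-FPA for grid pathfinding, and the Gaussian step standing in for a true Lévy flight is a simplification:

```python
import random

def fpa(objective, dim=2, n_flowers=20, switch_p=0.8, iters=200, seed=1):
    """Minimal Flower Pollination Algorithm sketch: each flower either
    moves toward the current best (global pollination) or blends two
    random flowers (local pollination); improvements are kept greedily."""
    rng = random.Random(seed)
    flowers = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_flowers)]
    best = min(flowers, key=objective)
    for _ in range(iters):
        for i, x in enumerate(flowers):
            if rng.random() < switch_p:
                # global pollination: Gaussian step toward the best flower
                # (simplification of the Levy-flight step in standard FPA)
                cand = [xi + rng.gauss(0, 0.3) * (b - xi) for xi, b in zip(x, best)]
            else:
                # local pollination: blend of two randomly chosen flowers
                j, k = rng.sample(range(n_flowers), 2)
                eps = rng.random()
                cand = [xi + eps * (a - c) for xi, a, c in zip(x, flowers[j], flowers[k])]
            if objective(cand) < objective(x):  # greedy acceptance
                flowers[i] = cand
        best = min(flowers + [best], key=objective)
    return best

sphere = lambda v: sum(c * c for c in v)  # toy objective, optimum at origin
best = fpa(sphere)
```

In the pathfinding setting the abstract describes, the continuous objective would be replaced by a path-cost fitness over candidate grid paths.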

    Reducing reexpansions in iterative-deepening search by controlling cutoff bounds

    It is known that a best-first search algorithm like A* [5, 6] requires too much space (which often renders it unusable), while a depth-first search strategy does not guarantee an optimal-cost solution. The iterative-deepening algorithm IDA* [4] achieves both space and cost optimality for a class of tree-searching problems. However, for many other problems it takes too much computation time due to excessive reexpansion of nodes. This paper presents a modification of IDA* into an admissible iterative depth-first branch-and-bound algorithm for trees, IDA*_CR, which overcomes this drawback of IDA* and runs much faster using the same amount of storage. Algorithm IDA*_CRA, a bounded-suboptimal-cost variant of IDA*_CR, is also presented to reduce the execution time still further. Results on the 0/1 Knapsack Problem, the Traveling Salesman Problem, and the Flow Shop Scheduling Problem are shown.
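For readers unfamiliar with the baseline, here is a minimal sketch of plain IDA*: repeated depth-first searches with a growing f = g + h cutoff, shown on a tiny weighted graph. The paper's IDA*_CR additionally controls how the cutoff bound grows between iterations to reduce reexpansions; this sketch uses only the classic rule (next bound = smallest f that exceeded the current bound):

```python
def ida_star(start, goal, neighbors, h):
    """Plain IDA*: depth-first search bounded by f = g + h, with the bound
    raised to the smallest overflowing f after each failed iteration.
    Returns the solution path as a list of nodes, or None."""
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f               # smallest f that exceeded the bound
        if node == goal:
            return "FOUND"
        minimum = float("inf")
        for nxt, cost in neighbors(node):
            if nxt not in path:    # avoid cycles on the current path
                path.append(nxt)
                t = search(g + cost, bound)
                if t == "FOUND":
                    return "FOUND"
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t == "FOUND":
            return list(path)
        if t == float("inf"):
            return None            # goal unreachable
        bound = t                  # classic IDA* bound update

# toy weighted graph with admissible heuristic estimates to D
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
h_vals = {"A": 3, "B": 2, "C": 1, "D": 0}
route = ida_star("A", "D", lambda n: graph[n], lambda n: h_vals[n])
```

The excessive-reexpansion problem the paper targets is visible even here: every raised bound restarts the search from scratch.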

    Elasticity and resource aware scheduling in distributed data stream processing systems

    The era of big data has led to the emergence of new systems for real-time distributed stream processing; Apache Storm, for example, is one of the most popular stream processing systems in industry today. However, Storm, like many other stream processing systems, lacks several important and desirable features. One is elasticity for clusters running Storm, i.e., the ability to change the cluster size on demand. Since the current Storm scheduler uses a naïve round-robin approach to scheduling applications, another is an intelligent scheduler that uses the underlying hardware efficiently by taking resource demand and resource availability into account when scheduling. Both features would make Storm a more robust and efficient system. Although our target system is Storm, the techniques we have developed can be applied to other, similar stream processing systems. We have created a system called Stela, implemented in Storm, that can perform on-demand scale-out and scale-in operations in distributed processing systems. Stela is minimally intrusive and disruptive to running jobs: it maximizes performance improvement for scale-out operations and minimally decreases performance for scale-in operations, without changing the existing scheduling of jobs. Stela was developed in partnership with another Master's student, Le Xu [1]. We have also created a system called R-Storm that performs intelligent resource-aware scheduling within Storm. The default round-robin scheduling mechanism currently deployed in Storm disregards resource demands and availability, and can therefore be very inefficient. R-Storm is designed to maximize resource utilization while minimizing network latency. When scheduling tasks, R-Storm can satisfy both soft and hard resource constraints while minimizing the network distance between components that communicate with each other. The problem of mapping tasks to machines can be reduced to the Quadratic Multiple 3-Dimensional Knapsack Problem, which is NP-hard; however, our proposed scheduling algorithm within R-Storm attempts to bypass the limitations associated with this class of problems. We evaluate the performance of both Stela and R-Storm through our implementations in Storm, using several micro-benchmark Storm topologies as well as Storm topologies in use at Yahoo! Inc. Our experiments show that, compared to Apache Storm's default scheduler, Stela's scale-out operation reduces interruption time to as low as 12.5% and achieves throughput 45-120% higher than Storm's. For scale-in operations, Stela achieves almost zero throughput reduction after the scale-in, while the two other groups experience 200% and 50% throughput decreases, respectively. For R-Storm, we observed that topologies scheduled by R-Storm perform on average 50%-100% better than those scheduled by Storm's default scheduler.
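To make resource-aware placement concrete, here is a toy greedy sketch in the spirit described above: enforce hard CPU/memory constraints and prefer co-locating tasks that communicate heavily. The task/machine structures and the scoring rule are invented for illustration; this is not R-Storm's actual algorithm:

```python
def schedule(tasks, machines, edges):
    """tasks: {name: (cpu, mem)}; machines: {name: [cpu_free, mem_free]};
    edges: set of (task_a, task_b) pairs that communicate heavily.
    Greedily places each task, mutating machine capacities."""
    placement = {}
    for task, (cpu, mem) in tasks.items():
        # hard constraint: only machines with enough residual capacity
        fits = [m for m, cap in machines.items() if cap[0] >= cpu and cap[1] >= mem]
        if not fits:
            raise RuntimeError(f"no machine can host {task}")

        def score(m):
            # prefer machines already hosting a communicating partner,
            # then the machine with the most free CPU
            partners = sum(1 for a, b in edges
                           if (a == task and placement.get(b) == m)
                           or (b == task and placement.get(a) == m))
            return (-partners, -machines[m][0])

        chosen = min(fits, key=score)
        placement[task] = chosen
        machines[chosen][0] -= cpu
        machines[chosen][1] -= mem
    return placement

# hypothetical topology: spout feeds bolt1; bolt2 is independent
tasks = {"spout": (2, 2), "bolt1": (2, 2), "bolt2": (2, 2)}
machines = {"m1": [4, 4], "m2": [4, 4]}
edges = {("spout", "bolt1")}
placement = schedule(tasks, machines, edges)
```

The greedy pass sidesteps the NP-hard knapsack formulation at the cost of optimality, which is the same trade-off the abstract alludes to.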

    Planning under time pressure

    Heuristic search is a technique used pervasively in artificial intelligence and automated planning. Often an agent is given a task that it would like to solve as quickly as possible, and it must allocate its time between planning the actions that achieve the task and actually executing them. We call this problem planning under time pressure. Most popular heuristic search algorithms are ill-suited to this setting, as they either search a lot to find short plans or search a little and find long plans. The thesis of this dissertation is: when under time pressure, an automated agent should explicitly attempt to minimize the sum of planning and execution times, not just one or the other. This dissertation makes four contributions. First, we present new algorithms that use modern multi-core CPUs to decrease planning time without increasing execution time. Second, we introduce a new model for predicting the performance of iterative-deepening search; the model is as accurate as previous offline techniques while using less training data, but can also be used online to reduce the overhead of iterative-deepening search, resulting in faster planning. Third, we show offline planning algorithms that directly attempt to minimize the sum of planning and execution times. Fourth, we consider algorithms that plan online, in parallel with execution. Both the offline and online algorithms account for a user-specified preference between search and execution, and can greatly outperform standard utility-oblivious techniques. By addressing the problem of planning under time pressure, these contributions demonstrate that heuristic search is no longer restricted to optimizing solution cost, obviating the need to choose between slow search times and expensive solutions.
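The core objective can be illustrated with a small sketch: given hypothetical snapshots from an anytime planner (cumulative planning time, execution time of the best plan found so far), pick the stopping point that minimizes planning time plus weighted execution time. The snapshot values and the weight parameter are invented for illustration:

```python
def best_stop(snapshots, w=1.0):
    """snapshots: list of (planning_time, execution_time) pairs from an
    anytime planner; w encodes a user preference weight on execution.
    Returns the index minimizing planning_time + w * execution_time."""
    return min(range(len(snapshots)),
               key=lambda i: snapshots[i][0] + w * snapshots[i][1])

# hypothetical anytime-planner profile: more planning yields shorter plans
snaps = [(0.1, 30.0), (0.5, 12.0), (2.0, 9.0), (10.0, 8.5)]
stop = best_stop(snaps)  # 2.0 + 9.0 = 11.0 is the smallest sum, index 2
```

Neither extreme wins here: stopping immediately gives a long plan (30.1 total) and planning to near-optimality wastes time (18.5 total), which is exactly the trade-off the dissertation formalizes.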

    Scheduling of flexible manufacturing systems integrating petri nets and artificial intelligence methods.

    The work undertaken in this thesis concerns the integration of two well-known methodologies: Petri net (PN) modelling/analysis of industrial production processes and Artificial Intelligence (AI) optimisation search techniques. The objective of this integration is to demonstrate its potential for solving a difficult and widely studied problem, the scheduling of Flexible Manufacturing Systems (FMS). This work builds on existing results that clearly show the convenience of PNs as a modelling tool for FMS, and it addresses the problem of integrating PN and AI-based search methods. While this is recognised as a potentially important approach to the scheduling of FMS, there is a lack of clear evidence that practical systems can be built. This thesis presents a novel scheduling methodology that advances the current state of the art in the area. Firstly, it presents a novel modelling procedure based on a new class of PN (cb-NETS) and a language to define the essential features of basic FMS, demonstrating that the inclusion of high-level FMS constraints is straightforward. Secondly, it demonstrates that PN analysis is useful in reducing search complexity, and presents two main results: a novel heuristic function based on PN analysis that is more efficient than existing methods, and a novel reachability scheme that avoids futile exploration of candidate schedules. Thirdly, it presents a novel scheduling algorithm that overcomes the efficiency drawbacks of previous algorithms; this algorithm satisfactorily addresses the complexity issue while achieving very promising results in terms of optimality. Finally, the thesis presents a novel hybrid scheduler that demonstrates the convenience of PNs as a representation paradigm to support hybridisation between traditional OR methods, AI systematic search, and stochastic optimisation algorithms. Initial results show that the approach is promising.
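To ground the Petri-net side of the integration, here is a minimal place/transition firing sketch: markings are token counts, a transition is enabled when its input places hold enough tokens, and firing yields the successor marking that a scheduler's reachability search would explore. The toy net (one machine processing one part) is invented for illustration and is not the thesis's cb-NETS class:

```python
def enabled(marking, pre):
    """A transition is enabled if every input place has enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places (pre),
    produce tokens in output places (post); returns the new marking."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m.get(p, 0) - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# toy net: t1 starts processing a part, t2 finishes and frees the machine
m0 = {"part_waiting": 1, "machine_free": 1}
t1 = ({"part_waiting": 1, "machine_free": 1}, {"busy": 1})
t2 = ({"busy": 1}, {"part_done": 1, "machine_free": 1})

m1 = fire(m0, *t1) if enabled(m0, t1[0]) else m0
m2 = fire(m1, *t2) if enabled(m1, t2[0]) else m1
```

A scheduler in this style searches the graph of reachable markings for a firing sequence that completes all parts at minimum cost, which is where the thesis's PN-derived heuristics and reachability pruning apply.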