
    A Hybrid multi-agent architecture and heuristics generation for solving meeting scheduling problem

    Agent-based computing has attracted much attention as a promising technique for application domains that are distributed, complex and heterogeneous. Current research on multi-agent systems (MAS) has become mature enough to be applied as a technology for solving problems in an increasingly wide range of complex applications. The main formal architectures used to describe the relationships between agents in MAS are centralised and distributed architectures. In computational complexity theory, problems are classified into the following categories: (i) P problems, (ii) NP problems, (iii) NP-complete problems, and (iv) NP-hard problems. No method is known for computing exact solutions to NP-hard problems in a reasonable time frame with the algorithms and computational power available today, and unfortunately many practical problems belong to this very class. Since these problems must nevertheless be solved, approximation techniques are the only practical option. Heuristic solution techniques are one such alternative: a heuristic is a strategy that is powerful in general but not guaranteed to provide the best (i.e. optimal) solution, or even to find a solution at all. This motivates the adoption of optimisation techniques such as Evolutionary Algorithms (EA).
    This research investigates the feasibility of running computationally intensive algorithms on multi-agent architectures while preserving the ability of small agents to run on small devices, including mobile devices. To achieve this, the present work proposes a new Hybrid Multi-Agent Architecture (HMAA) that generates new heuristics for solving NP-hard problems. The architecture is hybrid because it is a "semi-distributed/semi-centralised" architecture: variables and constraints are distributed among small agents exactly as in distributed architectures, but when the small agents become stuck, centralised control takes over and the variables are transferred to a super agent that has a central view of the whole system and possesses far greater computational power, together with intensive algorithms for generating new heuristics for the small agents, which then find an optimal solution to the specified problem.
    This research contributes the following: (1) the Hybrid Multi-Agent Architecture (HMAA), which generates new heuristics for solving many NP-hard problems; (2) two implemented frameworks of HMAA, a search framework and an optimisation framework; (3) a new SMA meeting scheduling heuristic; (4) a new SMA repair strategy for the scheduling process; (5) a Small Agent (SMA) responsible for meeting scheduling; (6) "Local Search Programming" (LSP), a new concept for evolutionary approaches; (7) two types of super agent (LGP_SUA and LSP_SUA) implemented in the HMAA, with two SUAs (local and global optima) implemented for each type; and (8) a prototype of HMAA that employs the proposed meeting scheduling heuristic with the repair strategy on SMAs and the four extensive algorithms on SUAs. The results reveal that this architecture is applicable to many different application domains because of its simplicity and efficiency, and its performance was better than that of many existing meeting scheduling architectures. HMAA can also be adapted to other types of evolutionary approaches.
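
    The escalation logic described above can be sketched in a few lines of Python. The agent classes, the greedy slot heuristic and the exhaustive super-agent step below are illustrative stand-ins, not the thesis's actual SMA/SUA algorithms; only the hybrid control flow (local attempt first, centralised fallback when the small agents are stuck or disagree) follows the description in the abstract.

```python
# Minimal sketch of the hybrid control flow (hypothetical class and function
# names): small agents schedule locally; when they cannot agree, their
# constraints are handed to a super agent with a global view.
class SmallAgent:
    def __init__(self, name, busy_slots):
        self.name = name
        self.busy = set(busy_slots)          # slots already occupied

    def propose(self, slots):
        """Greedy local heuristic: return the first free slot, or None if stuck."""
        free = [s for s in slots if s not in self.busy]
        return free[0] if free else None

class SuperAgent:
    """Centralised fallback with a view of every agent's constraints."""
    def schedule(self, agents, slots):
        # Exhaustive check over all slots -- stands in for the intensive
        # heuristic-generation step of the real architecture.
        for s in slots:
            if all(s not in a.busy for a in agents):
                return s
        return None

def schedule_meeting(agents, slots):
    proposals = {a.name: a.propose(slots) for a in agents}
    if None not in proposals.values() and len(set(proposals.values())) == 1:
        return proposals[agents[0].name]         # small agents already agree
    return SuperAgent().schedule(agents, slots)  # escalate to central control

agents = [SmallAgent("alice", {1, 2}), SmallAgent("bob", {2, 3})]
print(schedule_meeting(agents, slots=range(1, 6)))  # -> 4, the first mutually free slot
```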

    Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments

    © Peter C. R. Lane, Fernand Gobet. This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License (CC BY-NC 3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges, as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the ‘speciated non-dominated sorting genetic algorithm’ for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high-quality models, adapted to provide a good fit to all available data. Peer reviewed. Final published version.
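
    The core of the multi-objective idea, keeping only parameter sets that are not dominated on any dataset's fit error, can be illustrated with a short sketch. The helper names and the error values below are hypothetical, and the speciation mechanism of the proposed algorithm is omitted.

```python
# Non-dominated filtering of candidate parameter sets scored on several
# datasets (illustrative values only; not the paper's models or data).
def dominates(a, b):
    """a dominates b if it is no worse on every error and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the parameter sets whose error vectors are non-dominated."""
    return [
        (params, errs) for params, errs in candidates
        if not any(dominates(other, errs) for _, other in candidates if other != errs)
    ]

# (parameter set, per-dataset fit errors)
candidates = [
    ({"lr": 0.1}, (0.20, 0.35)),
    ({"lr": 0.3}, (0.25, 0.30)),   # trades fit on dataset 1 for dataset 2
    ({"lr": 0.9}, (0.40, 0.40)),   # dominated by both of the above
]
for params, errs in pareto_front(candidates):
    print(params, errs)
```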

    State-of-the-art in aerodynamic shape optimisation methods

    Aerodynamic optimisation has become an indispensable component of aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant to many facets of technology. With advancements in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature regarding the relative performance of optimisation architectures and the algorithms they employ. This paper provides a well-balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into six different optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward certain algorithms by analysing the limitations, drawbacks, and benefits of the most widely used optimisation approaches. The review provides comprehensive but straightforward insight for non-specialists and a reference detailing the current state of the field for specialist practitioners.

    Holistic, data-driven, service and supply chain optimisation: linked optimisation.

    The intensity of competition and technological advancement in the business environment has made companies collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business objectives. However, managing the decision-making processes (such as scheduling, ordering, delivering and allocating) at the various business components while maintaining a holistic objective is a major business challenge, as these operations are complex and dynamic. This is because the overall chain of business processes is widely distributed across all the supply chain participants; therefore, no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and are strongly supported by optimisation algorithms - manufacturing optimisation, B2B ordering, financial trading, transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems such as supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, the blockage of the Suez Canal) have demonstrated the importance of these interdependencies and the need to create supply chains that are more resilient and have a significantly reduced impact on the environment. Such interacting decision-making systems need to be considered through an optimisation process, yet the interactions between them are not modelled. We therefore believe that modelling such interactions is an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains. The research achieves this aim and contributes to scholarship in four ways. Firstly, it considers the complexities of supply chain problems from a linked-problem perspective, leading to a formalism for characterising linked optimisation problems as a model for supply chains. Secondly, it adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets, using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkage in linked optimisation problems. Thirdly, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, the research investigates resilient state-of-the-art optimisation algorithms presented in the literature and designs suitable algorithmic approaches, inspired by the existing algorithms and the nature of the problem linkages, to address different problem linkages in supply chains.
    Considering the research findings and future perspectives, the study demonstrates the suitability of these algorithms for different linked structures involving two sub-problems, which suggests further investigation of issues such as the suitability of the algorithms for more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory and dependency analysis.
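
    A toy illustration of the linked-problem idea is sketched below: a production-scheduling sub-problem feeds its solution into a vehicle-assignment sub-problem, so the two decisions cannot sensibly be optimised in isolation. The data and the greedy solvers are illustrative only and are not drawn from the thesis's benchmark.

```python
# Two linked sub-problems: the output of sub-problem 1 (completion days)
# becomes the input of sub-problem 2 (truck assignment).
def schedule_production(orders, capacity_per_day):
    """Sub-problem 1: greedily assign each order to the earliest day with spare capacity."""
    days, load = {}, {}
    for order, qty in orders.items():
        day = 0
        while load.get(day, 0) + qty > capacity_per_day:
            day += 1
        load[day] = load.get(day, 0) + qty
        days[order] = day
    return days                      # completion day per order

def assign_vehicles(completion_days, truck_capacity):
    """Sub-problem 2: orders finished on the same day share trucks."""
    trucks = 0
    for day in set(completion_days.values()):
        n_orders = sum(1 for d in completion_days.values() if d == day)
        trucks += -(-n_orders // truck_capacity)   # ceiling division
    return trucks

orders = {"A": 40, "B": 70, "C": 30}
plan = schedule_production(orders, capacity_per_day=100)
print(plan, "trucks needed:", assign_vehicles(plan, truck_capacity=2))
```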

    Multi-objective Decentralised Coordination for Teams of Robotic Agents

    This thesis introduces two novel coordination mechanisms for a team of multiple autonomous decision makers, represented as autonomous robotic agents. Such techniques aim to improve the capabilities of robotic agents, such as unmanned aerial or ground vehicles (UAVs and UGVs), when deployed in real-world operations. In particular, the work reported in this thesis focuses on improving the decision making of teams of such robotic agents when deployed in an unknown, and dynamically changing, environment to perform search and rescue operations for lost targets. This problem is well known and studied within both academia and industry, and coordination mechanisms for controlling such teams have been studied in both the robotics and the multi-agent systems communities. Within this setting, our first contribution solves a canonical target search problem, in which a team of UAVs is deployed in an environment to search for a lost target. Specifically, we present a novel decentralised coordination approach for teams of UAVs based on the max-sum algorithm. In more detail, we represent each agent as a UAV and study the applicability of the max-sum algorithm, a decentralised approximate message-passing algorithm, to coordinate a team of multiple UAVs for target search. We benchmark our approach against three state-of-the-art approaches within a simulation environment. The results show that coordination with the max-sum algorithm outperforms a best-response algorithm, which represents the state of the art in the coordination of UAVs for search, by up to 26%; an implicitly coordinated approach, where the coordination arises from the agents making decisions based on a common belief, by up to 34%; and a non-coordinated approach by up to 68%. These results indicate that the max-sum algorithm has the potential to be applied in complex systems operating in dynamic environments. We then move on to tackle coordination in which the team has more than one objective to achieve (e.g. maximise the coverage of the search area whilst minimising the amount of energy consumed by each UAV). To address this shortcoming, we present, as our second contribution, an extension of the max-sum algorithm to compute bounded solutions for problems involving multiple objectives. More precisely, we develop the bounded multi-objective max-sum algorithm (B-MOMS), a novel decentralised coordination algorithm able to solve problems involving multiple objectives while providing guarantees on the solution it recovers. B-MOMS extends the standard max-sum algorithm to compute bounded approximate solutions to multi-objective decentralised constraint optimisation problems (MO-DCOPs). Moreover, we prove the optimality of B-MOMS in acyclic constraint graphs, and derive problem-dependent bounds on its approximation ratio when these graphs contain cycles. Finally, we empirically evaluate its performance on a multi-objective extension of the canonical graph colouring problem. In so doing, we demonstrate that, for the settings we consider, the approximation ratio never exceeds 2, and is typically less than 1.5 for less-constrained graphs. Moreover, the runtime required by B-MOMS on the problem instances we considered never exceeds 30 minutes, even for maximally constrained graphs with one hundred agents.
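
    The flavour of the max-sum computation can be shown on a tiny, acyclic two-agent example, where a single exchange of factor-to-variable messages is exact. The utility table below is invented for illustration; the thesis applies the algorithm to much richer UAV search models and, in B-MOMS, to vectors of objectives.

```python
# Max-sum on a two-variable, one-factor graph: each agent decodes its own
# decision from the message it receives (illustrative utilities only).
domain = [0, 1]                       # each agent picks search region 0 or 1

def utility(x1, x2):
    """Joint utility: cover different regions, with a slight pull of agent 1 to region 0."""
    return (10 if x1 != x2 else 2) + (1 if x1 == 0 else 0)

# Factor-to-variable messages: r(x_i) = max over the other agent's choice of
# the joint utility (variable-to-factor messages are zero here, as each
# variable is attached to a single factor).
r_to_x1 = {x1: max(utility(x1, x2) for x2 in domain) for x1 in domain}
r_to_x2 = {x2: max(utility(x1, x2) for x1 in domain) for x2 in domain}

# Each agent decides locally from its incoming message.
x1 = max(domain, key=lambda v: r_to_x1[v])
x2 = max(domain, key=lambda v: r_to_x2[v])
print("messages:", r_to_x1, r_to_x2, "-> decisions:", (x1, x2), "utility:", utility(x1, x2))
```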

    A manufacturing system energy-efficient optimisation model for maintenance production workforce size determination using integrated fuzzy logic and quality function deployment approach

    In maintenance systems, the current approach to workforce analysis entails the use of metrics that focus exclusively on workforce cost and productivity. This method omits the “green” concept, which principally hinges on energy-efficient manufacturing, and also ignores production-maintenance integration. The approach is not accurate and cannot be relied upon for sound maintenance decisions. Consequently, a comprehensive, scientifically motivated, cost-effective and environmentally conscious approach is needed. With this in view, the current study departs from the traditional approach by employing a combined fuzzy, quality function deployment model interacting with three meta-heuristics (colliding bodies optimisation, big-bang big-crunch and particle swarm optimisation) for optimisation. The workforce size parameters are determined by maximising the workforce's earned value as well as electric power efficiency, subject to various real-life constraints. The efficacy and robustness of the model are tested with data from an aluminium products manufacturing system operating in a developing country. The results obtained indicate that the proposed colliding bodies optimisation framework is effective in comparison with the other techniques. This implies that the proposed methodology offers considerable benefits in conserving energy, thus aiding environmental preservation and reducing energy costs. The principal novelty of the paper is a new method of quantifying the energy-savings contribution of the maintenance workforce.
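
    As a rough illustration of how one of the named meta-heuristics could be applied, the sketch below uses a basic particle swarm optimiser to choose a workforce size that maximises a penalised objective combining earned value and energy cost. The objective function, weights, bounds and penalty are invented stand-ins for the paper's fuzzy/QFD model.

```python
# Basic PSO over a single decision variable (workforce size); all numbers are
# illustrative, not the paper's data or model.
import random

random.seed(1)

def objective(n_workers):
    """Illustrative earned value minus an energy-cost term, with a demand constraint."""
    n = max(1, round(n_workers))
    earned_value = 120 * n - 1.5 * n ** 2        # diminishing returns on staffing
    energy_cost = 4.0 * n                        # more staff -> more powered equipment
    penalty = 1e4 if n < 10 else 0.0             # must cover minimum maintenance demand
    return earned_value - energy_cost - penalty

def pso(n_particles=20, iters=100, lo=1, hi=80):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                               # personal bests
    gbest = max(pos, key=objective)              # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (gbest - pos[i])
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if objective(pos[i]) > objective(pbest[i]):
                pbest[i] = pos[i]
            if objective(pos[i]) > objective(gbest):
                gbest = pos[i]
    return round(gbest)

print("suggested workforce size:", pso())
```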

    Integrating continuous differential evolution with discrete local search for meander line RFID antenna design

    The automated design of meander line RFID antennas is a discrete self-avoiding walk (SAW) problem for which efficiency is to be maximized while resonant frequency is to be minimized. This work presents a novel exploration of how discrete local search may be incorporated into a continuous solver such as differential evolution (DE). A prior DE algorithm for this problem that incorporates an adaptive solution encoding and a bias favoring antennas with low resonant frequency is extended by the addition of the backbite local search operator and a variety of schemes for reintroducing modified designs into the DE population. The algorithm is extremely competitive with an existing ACO approach and the technique is transferable to other SAW problems and other continuous solvers. The findings indicate that careful reintegration of discrete local search results into the continuous population is necessary for effective performance.
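
    The backbite move at the heart of the added local search can be sketched directly on a lattice walk. The example walk below is invented, and the re-encoding of the modified design back into the continuous DE population, which the paper identifies as the critical step, is not reproduced here.

```python
# Backbite move on a 2-D self-avoiding walk (list of lattice points).
import random

random.seed(3)

def backbite(walk):
    """Reconnect the head of the walk to a non-consecutive neighbouring site,
    reversing the tail segment; the result is again a self-avoiding walk."""
    head = walk[-1]
    neighbours = [(head[0] + dx, head[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    # candidate reconnection points: sites already on the walk, excluding the
    # point the head is currently attached to
    candidates = [i for i, p in enumerate(walk[:-2]) if p in neighbours]
    if not candidates:
        return walk                   # no move available from this endpoint
    i = random.choice(candidates)
    return walk[:i + 1] + walk[i + 1:][::-1]

# an L-shaped 6-site walk on the integer lattice
walk = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
new_walk = backbite(walk)
print(new_walk)
assert len(set(new_walk)) == len(new_walk)              # still self-avoiding
assert all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1     # still a connected path
           for a, b in zip(new_walk, new_walk[1:]))
```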

    Search with evolutionary ruin and stochastic rebuild: a theoretic framework and a case study on exam timetabling

    This paper presents a state-transition-based formal framework for a new search method, called Evolutionary Ruin and Stochastic Recreate, which tries to learn and adapt to changing environments during the search process. It improves the performance of the original Ruin and Recreate principle by embedding an additional phase of Evolutionary Ruin to mimic the survival-of-the-fittest mechanism within single solutions. The method executes a cycle of Solution Decomposition, Evolutionary Ruin, Stochastic Recreate and Solution Acceptance until a stopping condition is met. The Solution Decomposition phase first uses problem-specific knowledge to decompose a complete solution into its components and assigns a score to each component. The Evolutionary Ruin phase then employs two evolutionary operators (namely Selection and Mutation) to destroy a certain fraction of the solution, and the subsequent Stochastic Recreate phase repairs the “broken” solution. Finally, the Solution Acceptance phase applies a specific strategy to determine the probability of accepting the newly generated solution. Hence, optimisation is achieved by an iterative process of component evaluation, solution disruption and stochastic constructive repair. From the state-transition point of view, this paper presents a probabilistic model and carries out a Markov chain analysis of some theoretical properties of the approach. Unlike the theoretical work on genetic algorithms and simulated annealing, which is based on state transitions within the space of complete assignments, our model is based on state transitions within the space of partial assignments. Exam timetabling problems are used to test the performance of the approach in solving real-world hard problems.
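
    The four-phase cycle can be illustrated on a toy timetabling instance, as in the sketch below. The component scores, ruin probability and acceptance rule are simple illustrative choices rather than the paper's actual framework.

```python
# Evolutionary Ruin and Stochastic Recreate on a toy exam-timetabling problem:
# assign exams to slots so that clashing exams get different slots.
import random

random.seed(7)
clashes = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]   # exams sharing students
exams, slots = ["A", "B", "C", "D"], [0, 1, 2]

def cost(sol):
    return sum(sol[a] == sol[b] for a, b in clashes)          # number of clashes

def component_score(exam, sol):
    """Solution Decomposition: score each exam by how many of its clashes it violates."""
    return sum(sol[a] == sol[b] for a, b in clashes if exam in (a, b))

def evolutionary_ruin(sol, ruin_prob=0.8):
    """Selection + Mutation: probabilistically drop the violating components."""
    return {e: s for e, s in sol.items()
            if not (component_score(e, sol) > 0 and random.random() < ruin_prob)}

def stochastic_recreate(partial):
    sol = dict(partial)
    for e in exams:
        if e not in sol:
            sol[e] = random.choice(slots)                     # random constructive repair
    return sol

current = {e: 0 for e in exams}                               # deliberately bad start
for _ in range(200):
    candidate = stochastic_recreate(evolutionary_ruin(current))
    if cost(candidate) <= cost(current):                      # simple acceptance strategy
        current = candidate
print(current, "clashes:", cost(current))
```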