760 research outputs found
Numerical Simulation and Assessment of Meta Heuristic Optimization Based Multi Objective Dynamic Job Shop Scheduling System
In today's manufacturing world, cost reduction is one of the most important issues: a successful business must reduce its costs to remain competitive. Machine scheduling plays an important role in production planning and control as a tool that helps manufacturers reduce costs by maximizing the use of their resources. The scheduling problem is not limited to machine scheduling; it also covers many other areas, such as computer and information technology and communication. By definition, scheduling is the art of planning the allocation and utilization of resources to achieve a goal. The aim of a schedule is to complete tasks in a reasonable amount of time, where reasonableness is a performance measure of how well the resources are allocated to tasks; time, or time-dependent functions, are typically used as performance measures. The objectives of this research are to develop Intelligent Search Heuristic Algorithms (ISHA) for equal- and variable-size sublots in m-machine flow shop problems, to implement the Particle Swarm Optimization (PSO) algorithm in MATLAB, and to develop a PSO-based optimization program for efficient job shop scheduling. The work also addresses the observation and verification of the results of PSO-based job shop scheduling with the help of Gantt charts
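A common way to apply PSO to scheduling is a random-key encoding, where each particle's continuous position vector is decoded into a job order. The sketch below illustrates that idea only; the instance data, parameters, and single-machine weighted-completion-time objective are illustrative assumptions, not taken from the abstract's job shop formulation.

```python
import random

# Hypothetical toy instance: processing times and weights for 5 jobs
# (illustrative data, not from the paper).
proc = [4, 2, 7, 3, 5]
wts = [1, 3, 2, 2, 1]

def decode(position):
    # Random-key decoding: sort job indices by their key value.
    return sorted(range(len(position)), key=lambda j: position[j])

def cost(order):
    # Total weighted completion time on a single machine.
    t, total = 0, 0
    for j in order:
        t += proc[j]
        total += wts[j] * t
    return total

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(proc)
    xs = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pb = [x[:] for x in xs]                      # personal bests
    pb_cost = [cost(decode(x)) for x in xs]
    g = pb[min(range(n_particles), key=lambda i: pb_cost[i])][:]
    g_cost = min(pb_cost)                        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pb[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            c = cost(decode(xs[i]))
            if c < pb_cost[i]:
                pb[i], pb_cost[i] = xs[i][:], c
                if c < g_cost:
                    g, g_cost = xs[i][:], c
    return decode(g), g_cost
```

Because the decoder always produces a valid permutation, the continuous PSO dynamics never need repair operators, which is the main appeal of random keys for scheduling.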
Direct transcription of low-thrust trajectories with finite trajectory elements
This paper presents a novel approach to the design of low-thrust trajectories, based on a first-order approximate analytical solution of the Gauss planetary equations. This analytical solution is shown to have better accuracy than a second-order explicit numerical integrator, at a lower computational cost; hence, it can be employed for the fast propagation of perturbed Keplerian motion when moderate accuracy is required. The analytical solution was integrated into a direct transcription method based on a decomposition of the trajectory into direct finite perturbative elements (DFPET). DFPET were applied to the solution of two-point boundary transfer problems. Furthermore, the paper presents an example of the use of DFPET for the solution of a multiobjective trajectory optimisation problem in which both the total ∆V and the transfer time are minimized with respect to the departure and arrival dates. Two transfer problems were used as test cases: a direct transfer from Earth to Mars and a spiral from a low Earth orbit to the International Space Station
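The idea of a cheap first-order analytical solution versus explicit numerical integration can be illustrated on a single Gauss variational equation: the evolution of the semi-major axis under a small constant tangential acceleration, assuming a near-circular orbit. This is a generic, simplified stand-in, not the paper's DFPET formulation; the parameter values are assumptions.

```python
from math import sqrt

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def dadt(a, f_t):
    # Gauss variational equation for the semi-major axis under a small
    # tangential acceleration f_t, for a near-circular orbit.
    return 2.0 * sqrt(a**3 / MU) * f_t

def propagate_first_order(a0, f_t, t):
    # First-order analytical step: freeze the right-hand side at a0.
    return a0 + dadt(a0, f_t) * t

def propagate_rk4(a0, f_t, t, steps=1000):
    # Reference numerical propagation (classical RK4).
    a, h = a0, t / steps
    for _ in range(steps):
        k1 = dadt(a, f_t)
        k2 = dadt(a + 0.5 * h * k1, f_t)
        k3 = dadt(a + 0.5 * h * k2, f_t)
        k4 = dadt(a + h * k3, f_t)
        a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

a0, f_t, t = 7000.0, 1e-7, 86400.0  # km, km/s^2, one day of thrusting
```

For thrust levels this small, the analytical estimate stays within tens of metres of the RK4 reference over a day, at a fraction of the function evaluations, which is why such closed-form steps are attractive inside an optimiser's inner loop.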
Intervention in the social population space of Cultural Algorithm
Cultural Algorithms (CA) offer a better way to simulate social and culture-driven agents by introducing the notion of culture into an artificial population. When mimicking intelligent social beings such as humans, the search for a better fit, or the global optimum, becomes multi-dimensional because of the complexity produced by the relevant system parameters and intricate social behaviour. This research presents an extended CA framework whose architecture provides extensions to the basic CA. The major extension is a mechanism for influencing selected individuals in the population space by means of an existing social network, thereby altering the cultural belief favourably. Another extension introduces the concept of a social network into the population space: the agents in the population are placed into one (or more) networks through which they can communicate and propagate knowledge. Identifying and exploiting such networks is necessary, since it may lead to a quicker shift of the cultural norm
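A minimal sketch of the basic CA loop the framework extends (acceptance of elite individuals into a belief space, then influence of that belief back onto the population) may make the idea concrete. Everything below, including the normative-interval belief space and the sphere test function, is a generic textbook-style assumption; it does not model the social-network extension proposed here.

```python
import random

def sphere(x):
    # Simple continuous test function to minimize (optimum at the origin).
    return sum(v * v for v in x)

def cultural_algorithm(dim=2, pop_size=30, gens=60, accept=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    # Belief space (normative knowledge): a promising interval per dimension.
    lo, hi = [-5.0] * dim, [5.0] * dim
    n_accept = max(1, int(accept * pop_size))
    for _ in range(gens):
        pop.sort(key=sphere)
        elite = pop[:n_accept]
        # Acceptance: the belief space learns from the accepted individuals.
        for d in range(dim):
            lo[d] = min(ind[d] for ind in elite)
            hi[d] = max(ind[d] for ind in elite)
        # Influence: resample the worst half inside the normative interval.
        for i in range(pop_size // 2, pop_size):
            pop[i] = [rng.uniform(lo[d], hi[d]) for d in range(dim)]
    return min(pop, key=sphere)
```

The acceptance/influence cycle is the hook the abstract refers to: injecting individuals through a social network is one more way of steering what the belief space learns.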
An analysis of indirect optimisation strategies for scheduling.
By incorporating domain knowledge, simple greedy procedures can be defined to generate reasonably good solutions to many optimisation problems. However, such solutions are unlikely to be optimal, and their quality often depends on the way the decision variables are input to the greedy method. Indirect optimisation uses meta-heuristics to optimise the input of the greedy decoders. As performance and runtime differ across greedy methods and meta-heuristics, deciding how to split the computational effort between the two sides of the optimisation is not trivial and can significantly impact the search. In this paper, an artificial scheduling problem is presented along with five greedy procedures using varying levels of domain information. A methodology to compare different indirect optimisation strategies is presented using a simple Hill Climber, a Genetic Algorithm and a population-based Local Search. By assessing all combinations of meta-heuristics and greedy procedures on a range of problem instances with different properties, experiments show that encapsulating problem knowledge within greedy decoders may not always prove successful, and that simpler methods can lead to results comparable with more advanced ones when combined with meta-heuristics that are adapted to the problem. However, the use of efficient greedy procedures reduces the relative difference between meta-heuristics
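The indirect scheme (a meta-heuristic searching over inputs to a greedy decoder) can be sketched as follows. The instance (jobs on two identical machines), the swap-move Hill Climber, and the earliest-finishing-machine decoder are illustrative assumptions, not the paper's artificial problem or any of its five greedy procedures.

```python
import random

# Hypothetical instance: job processing times, two identical machines.
proc = [5, 3, 8, 2, 7, 4, 6, 1]

def greedy_decode(order):
    # Greedy decoder: assign each job (in the given input order) to the
    # machine that currently finishes earliest; return the makespan.
    loads = [0, 0]
    for j in order:
        m = 0 if loads[0] <= loads[1] else 1
        loads[m] += proc[j]
    return max(loads)

def hill_climb(iters=500, seed=0):
    # Indirect optimisation: the Hill Climber searches over input orders,
    # while all scheduling knowledge lives inside the greedy decoder.
    rng = random.Random(seed)
    order = list(range(len(proc)))
    rng.shuffle(order)
    best = greedy_decode(order)
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]  # swap move
        cand = greedy_decode(order)
        if cand <= best:
            best = cand
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best
```

Note the division of labour the paper studies: a smarter decoder (e.g. one that pre-sorts jobs) would need fewer meta-heuristic evaluations, while a dumber decoder shifts the burden onto the search.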
Probabilistic modeling and reasoning in multiagent decision systems
Ph.D. (Doctor of Philosophy)
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
A new and efficient intelligent collaboration scheme for fashion design
Technology-mediated collaboration process has been extensively studied for over a decade. Most applications with collaboration concepts reported in the literature focus on enhancing efficiency and effectiveness of the decision-making processes in objective and well-structured workflows. However, relatively few previous studies have investigated the applications of collaboration schemes to problems with subjective and unstructured nature. In this paper, we explore a new intelligent collaboration scheme for fashion design which, by nature, relies heavily on human judgment and creativity. Techniques such as multicriteria decision making, fuzzy logic, and artificial neural network (ANN) models are employed. Industrial data sets are used for the analysis. Our experimental results suggest that the proposed scheme exhibits significant improvement over the traditional method in terms of the time–cost effectiveness, and a company interview with design professionals has confirmed its effectiveness and significance
Computational Theory of Mind for Human-Agent Coordination
In everyday life, people often depend on their theory of mind, i.e., their ability to reason about unobservable mental content of others to understand, explain, and predict their behaviour. Many agent-based models have been designed to develop computational theory of mind and analyze its effectiveness in various tasks and settings. However, most existing models are not generic (e.g., only applied in a given setting), not feasible (e.g., require too much information to be processed), or not human-inspired (e.g., do not capture the behavioral heuristics of humans). This hinders their applicability in many settings. Accordingly, we propose a new computational theory of mind, which captures the human decision heuristics of reasoning by abstracting individual beliefs about others. We specifically study computational affinity and show how it can be used in tandem with theory of mind reasoning when designing agent models for human-agent negotiation. We perform two-agent simulations to analyze the role of affinity in getting to agreements when there is a bound on the time to be spent for negotiating. Our results suggest that modeling affinity can ease the negotiation process by decreasing the number of rounds needed for an agreement as well as yield a higher benefit for agents with theory of mind reasoning.
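To make the claimed role of affinity concrete, here is a heavily simplified, hypothetical alternating-offers sketch in which higher mutual affinity speeds up concession and therefore shortens the negotiation under a round deadline. It does not implement the paper's theory-of-mind model or its belief abstraction; the concession rule and parameter values are assumptions.

```python
def negotiate(affinity, deadline=20):
    # Hypothetical alternating-offers protocol on a normalized price in
    # [0, 1]: buyer and seller concede linearly each round, and higher
    # mutual affinity (in [0, 1]) scales up the concession step.
    buyer, seller = 0.0, 1.0
    step = 0.06 * (1.0 + affinity)
    for rounds in range(1, deadline + 1):
        buyer += step
        seller -= step
        if buyer >= seller:              # offers crossed: agreement
            return rounds, (buyer + seller) / 2.0
    return None                          # deadline reached, no deal
```

Under these assumptions, maximal affinity roughly halves the number of rounds needed to agree, mirroring the qualitative effect the abstract reports.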
A new ant colony optimization model for complex graph-based problems
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: July 2014. Nowadays, there is a huge number of problems that, due to their complexity, require heuristic-based algorithms to search for near-optimal (or even optimal) solutions. These problems are usually NP-complete, so classical algorithms are not good candidates to address them: they need a large amount of computational resources, or they simply cannot find any solution when the problem grows. Classic examples of this kind of problem are the Travelling Salesman Problem (TSP) and the N-Queens problem. Examples can also be found in real and industrial domains related to the optimization of complex problems, such as planning, scheduling, Vehicle Routing Problems (VRP), the WiFi network Design Problem (WiFiDP) or behavioural pattern identification, among others.
Regarding heuristic-based algorithms, two well-known paradigms are Swarm Intelligence and Evolutionary Computation. Both paradigms belong to a subfield of Artificial Intelligence named Computational Intelligence, which also contains the areas of Fuzzy Systems, Artificial Neural Networks and Artificial Immune Systems. Swarm Intelligence (SI) algorithms focus on the collective behaviour of self-organizing systems. These algorithms are characterized by the generation of collective intelligence from non-complex individual behaviour and the communication schemes among individuals. Some examples of SI algorithms are particle swarm optimization, ant colony optimization (ACO), bee colony optimization and bird flocking.
Ant Colony Optimization (ACO) algorithms are based on the foraging behaviour of ants. In this kind of algorithm, the ants take different decisions during their execution that allow them to build their own solution to the problem. Once an ant has finished its execution, it goes back along the path it followed and deposits, in the environment, pheromones that contain information about the solution it built. These pheromones influence the decisions of future ants, producing an indirect communication through the environment called stigmergy.
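The decide-deposit-evaporate cycle described above can be sketched on a tiny TSP instance. The distance matrix and parameter values below are illustrative assumptions; the update rule is the standard Ant System scheme, not the new model this thesis proposes.

```python
import random

# Tiny symmetric TSP instance (illustrative distances, 4 cities).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco(n_ants=10, iters=50, rho=0.5, alpha=1.0, beta=2.0, seed=0):
    rng = random.Random(seed)
    n = len(D)
    tau = [[1.0] * n for _ in range(n)]   # pheromone trails
    best, best_len = None, float('inf')
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Probabilistic decision biased by pheromone and distance.
                w = [tau[i][j]**alpha * (1.0 / D[i][j])**beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            tours.append(tour)
        # Evaporation, then deposit: stigmergic communication through tau.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:
            L = tour_length(tour)
            if L < best_len:
                best, best_len = tour, L
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len
```

The `tau` matrix is the environment: no ant talks to another directly, yet short edges accumulate pheromone and attract later ants.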
When an ACO algorithm is applied to any of the optimization problems just described, the problem is usually modelled as a graph. Nevertheless, the classical graph-based representation is not the best one for the execution of ACO algorithms because it presents some important pitfalls. The first is the polynomial, or even exponential, growth of the resulting graph. The second concerns problems that require real-valued variables, which cannot be modelled using the classical graph-based representation.
On the other hand, Evolutionary Computation (EC) is a set of population-based algorithms inspired by the Darwinian evolutionary process. In this kind of algorithm there are one (or more) populations composed of different individuals, each representing a possible solution to the problem. At each iteration, the population evolves through evolutionary procedures, which means that better individuals (i.e. better solutions) are generated along the execution of the algorithm. Both kinds of algorithms, EC and SI, have traditionally been applied to the NP-hard problems mentioned above. Different population-based strategies have been developed, compared and even combined to design hybrid algorithms.
This thesis has focused on the analysis of classical graph-based representations and their application in ACO algorithms for complex problems, and on the development of a new ACO model that tries to take a step forward in this kind of algorithm. In this new model, the problem is represented using a reduced graph, which affects the behaviour of the ants, making it more complex. Also, this size reduction generates fast growth in the number of pheromones created. For this reason, a new metaheuristic (called the Oblivion Rate) has been designed to control the number of pheromones stored in the graph.
In this thesis, different metaheuristics have been designed for the proposed system and their performance has been compared. One of these metaheuristics is the Oblivion Rate, based on an exponential function that takes into account the number of pheromones created in the system. Another Oblivion Rate function is based on a bio-inspired swarm algorithm that uses some concepts extracted from evolutionary algorithms. This bio-inspired swarm algorithm is called the Coral Reef Optimization (CRO) algorithm and is based on the behaviour of corals in a reef.
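The idea of an exponential Oblivion Rate can be sketched as follows: as the number of stored pheromones grows, the fraction to forget approaches one. The specific functional form, the `n_max` parameter, and the keep-the-strongest policy below are hypothetical illustrations, not the thesis's actual definition.

```python
import math

def oblivion_rate(n_pheromones, n_max=1000.0):
    # Illustrative exponential Oblivion Rate: the fraction of stored
    # pheromones to forget grows toward 1 as the store fills up.
    # (Hypothetical form; the thesis defines its own function.)
    return 1.0 - math.exp(-n_pheromones / n_max)

def forget(pheromones, n_max=1000.0):
    # Apply the Oblivion Rate to a pheromone store (edge -> strength),
    # dropping the weakest entries first.
    rate = oblivion_rate(len(pheromones), n_max)
    n_keep = len(pheromones) - int(rate * len(pheromones))
    kept = sorted(pheromones.items(), key=lambda kv: kv[1], reverse=True)[:n_keep]
    return dict(kept)
```

The point of such a mechanism is to bound memory: on a reduced graph where pheromone entries multiply quickly, the store stays small while the strongest (most informative) trails survive.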
Finally, to test and validate the proposed model, different domains have been used, such as the N-Queens Problem, the Resource-Constrained Project Scheduling Problem, the Path Finding problem in video games, and Behavioural Pattern Identification in users. In some of these domains, the performance of the proposed model has been compared against a classical Genetic Algorithm to provide a comparative study and
perform an analytical comparison between both approaches.