417 research outputs found
A hybrid method for capacitated vehicle routing problem
The vehicle routing problem (VRP) is to service a number of customers with a fleet of vehicles. The VRP is an important problem in the fields of transportation, distribution and logistics. Typically the VRP deals with the delivery of some commodities from a depot to a number of customer locations with given demands. The problem arises in many diverse physical distribution situations, for example bus routing, preventive maintenance inspection tours, salesman routing, and the delivery of commodities such as mail, food or newspapers. We focus on the Symmetric Capacitated Vehicle Routing Problem (CVRP) with a single commodity and one depot; the restrictions are capacity and cost (or distance). For large instances, exact algorithms for solving the CVRP require considerable CPU time, and there is no guarantee that the optimal tours will be found within a reasonable time. Hence, heuristic and meta-heuristic algorithms may be the only practical approach. For a large CVRP, one must balance the computational time against the accuracy of the obtained solution when choosing a solving technique. This thesis proposes an effective hybrid approach that combines domain reduction with a greedy search algorithm, the Clarke and Wright algorithm, a simulated annealing algorithm, and a branch and cut method to solve the capacitated vehicle routing problem. The hybrid approach is applied to solve 14 benchmark CVRP instances. The results show that domain reduction can improve the classical Clarke and Wright algorithm by 8% and, when combined with branch and cut, reduce the computational time by approximately 50%. Our work in this thesis is organized into 6 chapters. Chapter 1 provides an introduction, general concepts, notation and terminology, and a summary of our work. In Chapter 2 we present a literature review on the CVRP; some heuristics and exact methods used to solve the problem are discussed. 
Also, this chapter describes the constraint programming (CP) technique, some examples of domain reduction, the advantages and disadvantages of using CP alone, and the importance of combining CP with exact MILP methods. Chapter 3 provides a simple greedy search algorithm and the results obtained by applying the algorithm to solve ten VRP instances. In Chapter 4 we incorporate domain reduction into the developed heuristic. The greedy algorithm with a restriction on each route, combined with domain reduction, is applied to solve the ten VRP instances. The obtained results show that domain reduction improves the solution by an average of 24%. The chapter also shows that the classical Clarke and Wright algorithm can be improved by 8% when combined with domain reduction. Chapter 4 further combines domain reduction with a simulated annealing algorithm, and uses the combination of domain reduction with the greedy algorithm, the Clarke and Wright algorithm, and the simulated annealing algorithm to solve 4 large CVRP instances. Chapter 5 incorporates the branch and cut method with domain reduction. The hybrid approach is applied to solve the 10 CVRP instances that we used in Chapter 4. This chapter shows that the hybrid approach reduces the CPU time taken to solve the 10 benchmark instances by approximately 50%. Chapter 6 concludes the thesis and provides some ideas for future work. An appendix of the 10 literature problems and generated instances is provided, followed by the bibliography.
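The Clarke and Wright savings method mentioned in this abstract can be sketched in a few lines. Below is a minimal, generic implementation of the classical parallel savings heuristic, not the thesis's hybrid with domain reduction; all function and variable names are illustrative:

```python
from itertools import combinations

def clarke_wright(dist, demand, capacity):
    """Classical Clarke and Wright savings heuristic for the CVRP.

    dist     -- (n+1) x (n+1) symmetric distance matrix, index 0 is the depot
    demand   -- demand[i] for customers i = 1..n (demand[0] is unused)
    capacity -- vehicle capacity
    Returns a list of routes, each a list of customer indices.
    """
    n = len(dist) - 1
    # Start with one route per customer: depot -> i -> depot.
    routes = {i: [i] for i in range(1, n + 1)}      # route id -> customer list
    route_of = {i: i for i in range(1, n + 1)}      # customer -> route id
    load = {i: demand[i] for i in range(1, n + 1)}

    # Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), processed largest first.
    savings = sorted(
        ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
         for i, j in combinations(range(1, n + 1), 2)),
        reverse=True)

    for s, i, j in savings:
        if s <= 0:
            break
        ri, rj = route_of[i], route_of[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        # Merge only when i and j sit at joinable route ends (simplified:
        # a full implementation would also consider route reversals).
        if routes[ri][-1] == i and routes[rj][0] == j:
            a, b = ri, rj
        elif routes[rj][-1] == j and routes[ri][0] == i:
            a, b = rj, ri
        else:
            continue
        routes[a].extend(routes[b])
        load[a] += load[b]
        for c in routes[b]:
            route_of[c] = a
        del routes[b], load[b]
    return list(routes.values())
```

On a toy line-shaped instance the heuristic merges single-customer routes until the capacity constraint blocks further merges; tightening the capacity splits the solution into more routes.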
Mixed Order Hyper-Networks for Function Approximation and Optimisation
Many systems take inputs, which can be measured and sometimes controlled, and outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used to either model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or of finding the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence.
There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black-box problem: their structure and the function they implement are opaque to the user. They also have a propensity to become trapped in local minima or on large plateaux of the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method.
This thesis presents a single framework for both regression and optimisation: the mixed order hyper network (MOHN). A MOHN implements a function f: {-1,1}^n → R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult to achieve in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared.
Experiments compare a MOHN to an MLP on regression tasks. The MOHN is found to achieve a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret and to combine into an ensemble. The trade-off between the fit of a model to its training data and its fit to an independent set of test data is shown to be easier to control in a MOHN than in an MLP.
A MOHN is also compared to a number of existing optimisation methods, including estimation of distribution algorithms, genetic algorithms and simulated annealing. The MOHN is able to find optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
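The abstract does not give the MOHN learning rules themselves; as a hedged illustration of the underlying idea, a function f: {-1,1}^n → R can be written as a weighted sum of products (monomials) of input variables, with one weight per included connection, and the weights estimated by least squares. The names below are illustrative, not the thesis's actual algorithms:

```python
import itertools
import numpy as np

def mohn_features(X, connections):
    """Monomial (Walsh) features: one column per connection.

    X           -- array of shape (m, n) with entries in {-1, +1}
    connections -- list of tuples of input indices; () is the bias term
    """
    return np.column_stack(
        [np.prod(X[:, list(c)], axis=1) if c else np.ones(len(X))
         for c in connections])

def fit_mohn(X, y, connections):
    """Least-squares estimate of one weight per connection."""
    Phi = mohn_features(X, connections)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_mohn(X, w, connections):
    return mohn_features(X, connections) @ w

# Example: recover f(x) = 0.5 + x0 - 2*x1*x2 from all 2^3 input patterns.
X = np.array(list(itertools.product([-1, 1], repeat=3)))
y = 0.5 + X[:, 0] - 2 * X[:, 1] * X[:, 2]
connections = [(), (0,), (1,), (2,), (0, 1), (1, 2)]
w = fit_mohn(X, y, connections)
```

Because the monomials are orthogonal over the full cube {-1,1}^3, the fitted weights recover the target coefficients exactly, and connections absent from the target get weight zero — this explicitness of structure is what the abstract contrasts with hidden-unit networks.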
Global Constraint Catalog, 2nd Edition (revision a)
This report presents a catalogue of global constraints where each constraint is explicitly described in terms of graph properties and/or automata and/or first-order logical formulae with arithmetic. When available, it also presents some typical usage as well as some pointers to existing filtering algorithms.
Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique influenced by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of the top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
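The canonical PSO update described above — particles pulled towards their personal best and the swarm's global best — can be sketched as follows. The parameter values (inertia `w`, acceleration coefficients `c1`, `c2`) are common textbook choices, not taken from the book:

```python
import random

def pso(f, n_dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser (global-best topology), minimising f."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(n_dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (personal best) + social pull (global best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, minimising the 2-D sphere function `sum(x**2)` over [-5, 5]^2 converges to near zero within a few hundred iterations.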
Global Constraint Catalog, 2nd Edition
This report presents a catalogue of global constraints where each constraint is explicitly described in terms of graph properties and/or automata and/or first-order logical formulae with arithmetic. When available, it also presents some typical usage as well as some pointers to existing filtering algorithms.
Design of Heuristic Algorithms for Hard Optimization
This open access book demonstrates all the steps required to design heuristic algorithms for difficult optimization problems. The classic travelling salesman problem is used as a common thread to illustrate all the techniques discussed. This problem is ideal for introducing readers to the subject because it is very intuitive and its solutions can be graphically represented. The book features a wealth of illustrations that allow the concepts to be understood at a glance. The book approaches the main metaheuristics from a new angle, deconstructing them into a few key concepts presented in separate chapters: construction, improvement, decomposition, randomization and learning methods. Each metaheuristic can then be presented in simplified form as a combination of these concepts. This approach avoids giving the impression that metaheuristics is a non-formal discipline, a kind of cloud sculpture. Moreover, it provides concrete applications to the travelling salesman problem, which illustrate in just a few lines of code how to design a new heuristic and remove all ambiguities left by a general framework. Two chapters reviewing the basics of combinatorial optimization and complexity theory make the book self-contained. As such, even readers with a very limited background in the field will be able to follow all the content.
Statistical physics of neural systems
The ability to process and store information is considered a characteristic
trait of intelligent systems. In biological neural networks, learning is strongly
believed to take place at the synaptic level, in terms of modulation of synaptic
efficacy. It can thus be interpreted as the expression of a collective phenomenon,
emerging when neurons connect to one another to form a complex network of
interactions. In this work, we represent learning as an optimization problem:
a local search, in the synaptic space, for specific configurations, known
as solutions, that make a neural network able to accomplish a series of different
tasks. For instance, we would like the network to adapt the strength of its synaptic
connections so as to be capable of classifying a series of objects, assigning to
each object its corresponding class label. Supported by a series of experiments, it
has been suggested that synapses may exploit only a small number of synaptic states
for encoding information. It is known that this feature makes learning in neural
networks a challenging task. Extending the large deviation analysis performed in
the extreme case of binary synaptic couplings, in this work, we prove the existence
of regions of the phase space, where solutions are organized in extremely dense
clusters. This picture turns out to be invariant to the tuning of all the parameters of
the model. Solutions within the clusters are more robust to noise, thus enhancing the
learning performance. This has inspired the design of new learning algorithms, and
has clarified the effectiveness of previously proposed ones. We further
provide quantitative evidence that the gain achievable by considering a greater
number of available synaptic states for encoding information is significant only up
to a few bits. This is in line with the above-mentioned experimental
results. Besides the challenging aspect of low precision synaptic connections, it is
also known that the neuronal environment is extremely noisy. Whether stochasticity
can enhance or worsen learning performance is currently a matter of debate. In
this work, we consider a neural network model where the synaptic connections are random variables, sampled according to a parametrized probability distribution.
We prove that this source of stochasticity naturally drives the system towards regions of the
phase space at high densities of solutions. These regions are directly accessible by
means of gradient descent strategies, over the parameters of the synaptic couplings
distribution. We further set up a statistical physics analysis, through which we
show that solutions in the dense regions are characterized by robustness and good
generalization performance. Stochastic neural networks are also capable of building
abstract representations of input stimuli and then generating new input samples,
according to the inferred statistics of the input signal. In this regard, we propose a
new learning rule, called Delayed Correlation Matching (DCM), that, by relying on the
matching between time-delayed activity correlations, makes a neural network able
to store patterns of neuronal activity. When considering hidden neuronal states, the
DCM learning rule is also able to train Restricted Boltzmann Machines as generative
models. In this work, we further require the DCM learning rule to fulfil some
biological constraints, such as locality, sparseness of the neural coding and Dale's
principle. While retaining all these biological requirements, the DCM learning
rule has been shown to be effective for different network topologies, both in on-line
learning regimes and in the presence of correlated patterns. We further show that it is
also able to prevent the creation of spurious attractor states.
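The DCM rule itself is specific to this thesis and its details are not given in the abstract. As a generic, classical illustration of storing patterns of neuronal activity in a recurrent network, here is a Hebbian (Hopfield-style) sketch; it is not the DCM rule, and all names are illustrative:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hopfield-style Hebbian storage: W = (1/N) * sum_mu xi^mu (xi^mu)^T,
    with zero self-couplings (no neuron drives itself)."""
    P = np.asarray(patterns, dtype=float)   # shape (p, N), entries in {-1, +1}
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous recall dynamics: repeatedly update s <- sign(W s)."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s
```

Starting the dynamics from a noisy version of a stored pattern, the network relaxes back to the stored pattern — the stored configurations act as attractors of the dynamics, the same notion of attractor states the abstract refers to.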
Quantum computing for finance
Quantum computers are expected to surpass the computational capabilities of
classical computers and have a transformative impact on numerous industry
sectors. We present a comprehensive summary of the state of the art of quantum
computing for financial applications, with particular emphasis on stochastic
modeling, optimization, and machine learning. This Review is aimed at
physicists, so it outlines the classical techniques used by the financial
industry and discusses the potential advantages and limitations of quantum
techniques. Finally, we look at the challenges that physicists could help
tackle.
Traveling Salesman Problem
This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the traveling salesman problem (TSP). It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and the Differential Evolution Algorithm. Hybrid systems, like Fuzzy Maps, Chaotic Maps and Parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, which will be a vital tool for researchers and graduate-entry students in the fields of applied Mathematics, Computing Science and Engineering.