Polynomial tuning of multiparametric combinatorial samplers
Boltzmann samplers and the recursive method are prominent algorithmic
frameworks for the approximate-size and exact-size random generation of large
combinatorial structures, such as maps, tilings, RNA sequences, or various
tree-like structures. In their multiparametric variants, these samplers allow
one to control the profile of expected values corresponding to multiple
combinatorial parameters. One can control, for instance, the number of leaves,
the profile of node degrees in trees, or the number of certain subpatterns in
strings. However, such flexible control requires an additional non-trivial
tuning procedure. In this paper, we propose an efficient tuning algorithm,
polynomial-time in the number of tuned parameters, based on convex
optimisation techniques. Finally, we illustrate the efficiency of our approach
with several applications to rational, algebraic, and P\'olya structures,
including polyomino tilings with prescribed tile frequencies, planar trees with
a given specific node degree distribution, and weighted partitions.
Comment: Extended abstract, accepted to ANALCO2018. 20 pages, 6 figures,
colours. Implementation and examples are available at [1]
https://github.com/maciej-bendkowski/boltzmann-brain [2]
https://github.com/maciej-bendkowski/multiparametric-combinatorial-sampler
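As a minimal illustration of the tuning problem (not the paper's multiparametric algorithm): for a single control parameter, the expected size under the Boltzmann model is increasing in the variable z, so one-dimensional bisection already suffices; the paper's contribution is the polynomial-time convex-programming analogue for many parameters at once. The generating function below, for plane binary trees, is a standard textbook example chosen here only for concreteness.

```python
import math

# Generating function for plane binary trees counted by internal nodes:
# B(z) = 1 + z * B(z)^2  =>  B(z) = (1 - sqrt(1 - 4z)) / (2z),  z in (0, 1/4)
def B(z):
    return (1 - math.sqrt(1 - 4 * z)) / (2 * z)

def expected_size(z, h=1e-7):
    # Expected size under the Boltzmann model is z * B'(z) / B(z);
    # the derivative is taken numerically for brevity.
    dB = (B(z + h) - B(z - h)) / (2 * h)
    return z * dB / B(z)

def tune(target, lo=1e-9, hi=0.25 - 1e-9):
    # Expected size is increasing in z, so bisection finds the tuning point.
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z = tune(100.0)   # Boltzmann parameter giving expected size ~ 100
```

The multiparametric case replaces this scalar search with a convex optimisation over one variable per tuned parameter.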
Portfolio selection using neural networks
In this paper we apply a heuristic method based on artificial neural networks
to trace out the efficient frontier associated with the portfolio selection
problem. We consider a generalization of the standard Markowitz mean-variance
model which includes cardinality and bounding constraints. These constraints
ensure investment in a given number of different assets and limit the amount
of capital invested in each asset. We present experimental results obtained
with the neural network heuristic and compare them to those obtained with
three previous heuristic methods.
Comment: 12 pages; submitted to "Computers & Operations Research"
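For context, the classical frontier without the cardinality and bounding constraints (and with short sales allowed) has a closed form, which is what heuristics like the above generalise. A sketch with illustrative numbers of my own choosing:

```python
import numpy as np

# Toy mean returns and covariance for three assets (made-up figures).
mu = np.array([0.08, 0.12, 0.15])
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.20]])

def frontier_weights(lam):
    # Unconstrained Markowitz: min w'Σw - lam * mu'w  s.t.  1'w = 1.
    # Stationarity gives 2Σw = lam*mu + gamma*1; pick gamma so weights sum to 1.
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    a = inv @ mu
    b = inv @ ones
    gamma = (2 - lam * (ones @ a)) / (ones @ b)
    return 0.5 * (lam * a + gamma * b)

# Trace frontier points by sweeping the risk/return trade-off lam.
points = [(w @ mu, w @ Sigma @ w)
          for w in (frontier_weights(l) for l in np.linspace(0.0, 1.0, 5))]
```

Once cardinality constraints enter, the problem becomes combinatorial and closed forms no longer apply, which motivates heuristics such as the neural network approach of the paper.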
Answer-set programming as a new approach to event-sequence testing
In many applications, faults are triggered by events that occur in a particular order. Based on the assumption that most bugs are caused by the interaction of a low number of events, Kuhn et al. recently introduced sequence covering arrays (SCAs) as suitable designs for event sequence testing. In practice, directly applying SCAs for testing is often impaired by additional constraints, and SCAs have to be adapted to fit application-specific needs. Modifying precomputed SCAs to account for problem variations can be problematic, if not impossible, and developing dedicated algorithms is costly. In this paper, we propose answer-set programming (ASP), a well-known knowledge-representation formalism from the area of artificial intelligence based on logic programming, as a declarative paradigm for computing SCAs. Our approach allows one to state complex coverage criteria concisely in an elaboration-tolerant way, i.e., small variations of a problem specification require only small modifications of the ASP representation.
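The defining property of an SCA can be checked directly. The sketch below (my own illustration, not from the paper) verifies t-way sequence coverage by brute force; the tests are assumed to be permutations of the full event set:

```python
from itertools import combinations, permutations

def covers_all_t_sequences(test_sequences, events, t):
    """SCA property: every ordering of every t events appears as a
    subsequence of at least one test sequence."""
    needed = {p for c in combinations(events, t) for p in permutations(c)}
    for seq in test_sequences:
        pos = {e: i for i, e in enumerate(seq)}   # each test is a permutation
        needed = {p for p in needed
                  if not all(pos[a] < pos[b] for a, b in zip(p, p[1:]))}
    return not needed

# A sequence together with its reverse covers every ordered pair (t = 2),
# but two tests are not enough for strength t = 3.
events = list("abcd")
tests = ["abcd", "dcba"]
```

An ASP encoding would state the same coverage condition declaratively and let the solver search for a small set of tests, rather than merely checking one.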
Optimising a nonlinear utility function in multi-objective integer programming
In this paper we develop an algorithm to optimise a nonlinear utility
function of multiple objectives over the integer efficient set. Our approach is
based on identifying and updating bounds on the individual objectives as well
as the optimal utility value. This is done using already known solutions,
linear programming relaxations, utility function inversion, and integer
programming. We develop a general optimisation algorithm for use with k
objectives, and we illustrate our approach using a tri-objective integer
programming problem.
Comment: 11 pages, 2 tables; v3: minor revisions, to appear in Journal of Global Optimization
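A brute-force sketch of the underlying problem (toy objectives and a toy nonlinear utility of my own choosing; the paper's algorithm exists precisely to avoid this kind of enumeration):

```python
from itertools import product

# Tiny bi-objective integer program: maximize (f1, f2) over {0..3}^2 with x + y <= 4.
def f1(x, y): return 3 * x + y
def f2(x, y): return x + 3 * y

feasible = [(x, y) for x, y in product(range(4), repeat=2) if x + y <= 4]

def dominated(p, points):
    # p is dominated if some other point is at least as good in both objectives.
    return any(q != p and f1(*q) >= f1(*p) and f2(*q) >= f2(*p) for q in points)

efficient = [p for p in feasible if not dominated(p, feasible)]

def utility(p):
    # Nonlinear utility over the objectives (here: their product).
    return f1(*p) * f2(*p)

best = max(efficient, key=utility)   # optimum of the utility over the efficient set
```

On this instance the efficient set is {(1,3), (2,2), (3,1)} with objective vectors (6,10), (8,8), (10,6), and the product utility is maximised at (2,2). The paper's bound-updating scheme reaches such a point without enumerating the feasible region.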
Postponing Branching Decisions
Solution techniques for Constraint Satisfaction and Optimisation Problems
often make use of backtrack search methods, exploiting variable and value
ordering heuristics. In this paper, we propose and analyse a very simple method
to apply in case the value ordering heuristic produces ties: postponing the
branching decision. To this end, we group together values in a tie, branch on
this sub-domain, and defer the decision among them to lower levels of the
search tree. We show theoretically and experimentally that this simple
modification can dramatically improve the efficiency of the search strategy.
Although in practice similar methods may already have been applied, to our
knowledge no empirical or theoretical study has been proposed in the
literature to identify when and to what extent this strategy should be used.
Comment: 11 pages, 3 figures
Combinatorial optimisation of a large, constrained simulation model: an application of compressed annealing
Simulation models are valuable tools in the analysis of complex, highly constrained economic systems unsuitable for solution by mathematical programming. However, model size may hamper the efforts of practitioners to efficiently identify the most valuable configurations. This paper investigates the efficacy of a new metaheuristic procedure, compressed annealing, for the solution of large, constrained systems. This algorithm is used to investigate the value of incorporating a sown annual pasture, French serradella (Ornithopus sativa Brot. cv. Cadiz), between extended cropping sequences in the central wheat belt of Western Australia. Compressed annealing is shown to be a reliable means of considering constraints in complex optimisation problems in agricultural economics. It is also highlighted that the value of serradella to dryland crop rotations increases with the initial weed burden and the profitability of livestock production.
Keywords: combinatorial optimisation, crop rotation, simulated annealing, Research Methods/Statistical Methods, C63, Q15
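A toy sketch of the compressed-annealing idea (illustrative problem and ad-hoc parameter schedules of my own choosing): as in simulated annealing, worsening moves are accepted with a temperature-dependent probability, but infeasible states are also admitted early on and are squeezed out by a penalty "pressure" that rises over time.

```python
import math, random

# Toy 0/1 knapsack: maximise value subject to a capacity constraint.
values  = [10, 7, 8, 4, 6]
weights = [5, 4, 6, 3, 2]
CAP = 10

def evaluate(x, pressure):
    value = sum(v for v, b in zip(values, x) if b)
    overload = max(0, sum(w for w, b in zip(weights, x) if b) - CAP)
    return value - pressure * overload        # penalised objective

def compressed_annealing(iters=20000, seed=0):
    rng = random.Random(seed)
    x = [0] * len(values)
    best, best_val = x[:], 0
    temp, pressure = 5.0, 0.1
    for _ in range(iters):
        y = x[:]
        y[rng.randrange(len(y))] ^= 1          # flip one item in/out
        delta = evaluate(y, pressure) - evaluate(x, pressure)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            x = y
        if sum(w for w, b in zip(weights, x) if b) <= CAP:
            val = sum(v for v, b in zip(values, x) if b)
            if val > best_val:
                best, best_val = x[:], val     # track best feasible solution
        temp *= 0.9995                         # cooling
        pressure *= 1.0005                     # compression: penalty rises
    return best, best_val

best, best_val = compressed_annealing()
```

Early low pressure lets the search traverse infeasible regions between feasible basins; late high pressure forces the walk back onto the feasible set.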
SimCrime: A Spatial Microsimulation Model for Analysing Crime in Leeds
This Working Paper is part of the PhD thesis 'Modelling Crime: A Spatial Microsimulation Approach', which aims to investigate the potential of spatial microsimulation for modelling crime. This Working Paper presents SimCrime, a static spatial microsimulation model of crime in Leeds. It is designed to estimate the likelihood of being a victim of crime and crime rates at the small-area level in Leeds, and to answer what-if questions about the effects of changes in the demographic and socio-economic characteristics of the future population. The model is based on individual microdata. Specifically, SimCrime combines individual microdata from the British Crime Survey (BCS), for which location data is available only at the scale of large areas, with census statistics for smaller areas, creating synthetic microdata estimates for output areas (OAs) in Leeds using a simulated annealing method. The new microdata dataset includes all the attributes from the original datasets. This allows variables such as crime victimisation from the BCS to be estimated directly for OAs.
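The simulated-annealing reweighting step at the heart of such static microsimulation can be sketched on toy data (hypothetical microdata records and census counts; not SimCrime's code): repeatedly swap survey individuals in and out of an area's synthetic population so that its margins approach the census targets.

```python
import math, random

# Hypothetical BCS-like microdata records: (age_group, employed_flag).
survey = [("young", 1), ("young", 0), ("old", 1),
          ("old", 0), ("young", 1), ("old", 1)]
census = {"young": 3, "old": 2}    # target counts for one output area (5 people)

def fit(sample):
    # Total absolute error between sample margins and census targets.
    counts = {"young": 0, "old": 0}
    for age, _ in sample:
        counts[age] += 1
    return sum(abs(counts[g] - census[g]) for g in census)

def anneal(n=5, iters=5000, seed=1):
    rng = random.Random(seed)
    sample = [rng.choice(survey) for _ in range(n)]
    temp = 2.0
    for _ in range(iters):
        cand = sample[:]
        cand[rng.randrange(n)] = rng.choice(survey)  # replace one individual
        delta = fit(sample) - fit(cand)              # positive = improvement
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            sample = cand
        temp *= 0.999                                # cooling
    return sample

area_population = anneal()
```

Once the margins match, every attribute carried by the selected survey records (here, the employment flag; in SimCrime, crime victimisation) becomes available at the small-area level.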