Free Lunch for Optimisation under the Universal Distribution
Function optimisation is a major challenge in computer science. The No Free Lunch theorems state that if all functions with the same histogram are assumed to be equally probable, then no algorithm outperforms any other in expectation. We argue against the uniform assumption and suggest that a universal prior exists for which there is a free lunch, but under which no particular class of functions is favoured over another. We also prove upper and lower bounds on the size of the free lunch.
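As an illustration of the uniform-prior setting the abstract argues against, here is a minimal Python sketch (a toy construction of my own, not from the paper): averaging over every function on a four-point domain, two fixed, non-adaptive search orderings achieve exactly the same expected best-so-far value at every evaluation budget.

```python
import itertools
import statistics

# Toy check of the uniform-prior No Free Lunch setting: enumerate every
# function f: X -> Y on a tiny domain and compare two fixed (non-adaptive)
# search orderings. Because the uniform average ranges over all functions,
# both orderings see the same multiset of value sequences, so their
# expected best-found value is identical at every budget k.
X = [0, 1, 2, 3]           # the search space
Y = [0, 1, 2]              # possible objective values

def best_after_k(f, order, k):
    """Best objective value seen after evaluating the first k points."""
    return max(f[x] for x in order[:k])

order_a = [0, 1, 2, 3]     # "algorithm" A: left-to-right sweep
order_b = [3, 1, 0, 2]     # "algorithm" B: a different fixed ordering

for k in range(1, len(X) + 1):
    all_funcs = [dict(zip(X, vals))
                 for vals in itertools.product(Y, repeat=len(X))]
    perf_a = statistics.mean(best_after_k(f, order_a, k) for f in all_funcs)
    perf_b = statistics.mean(best_after_k(f, order_b, k) for f in all_funcs)
    assert perf_a == perf_b       # NFL: no ordering wins in expectation
    print(f"k={k}: expected best value = {perf_a:.4f} for both orderings")
```

The free-lunch claim of the paper is precisely that replacing this uniform average with a universal prior breaks the symmetry that forces the two averages to coincide.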
Investigating Machine Learning Techniques for Solving Product-line Optimization Problems
Product-line optimization using consumers’ preferences measured by conjoint analysis is an important issue for marketing researchers. Since it is a combinatorial NP-hard optimization problem, several meta-heuristics have been proposed to ensure at least near-optimal solutions. This work presents meta-heuristics already used in the context of product-line optimization, such as genetic algorithms, simulated annealing, particle-swarm optimization, and ant-colony optimization. Furthermore, other promising approaches, such as harmony search, the multi-verse optimizer, and memetic algorithms, are introduced to the topic. All of these algorithms are applied to a function that maximizes profit under a probabilistic choice rule. The performance of the meta-heuristics is measured in terms of best and average solution quality. To determine the most suitable meta-heuristics for the underlying objective function, a Monte Carlo simulation over several different problem instances with simulated data is performed. The simulation results suggest the use of genetic algorithms, simulated annealing, and memetic algorithms for product-line optimization.
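To make the setting concrete, here is a small hypothetical Python sketch of the kind of search the abstract describes: a stripped-down genetic loop (selection and mutation only; crossover omitted for brevity) applied to a toy product-line profit function with a multinomial-logit choice rule. The utilities, margins, line size, and all parameters are invented and do not come from the paper's simulation design.

```python
import math
import random

# Hypothetical toy instance (all numbers invented, not the paper's data):
# choose K of N candidate product profiles to maximise expected profit under
# a multinomial-logit choice rule with an outside "no purchase" option.
random.seed(0)
N, K, CONSUMERS = 20, 5, 100
util = [[random.gauss(0, 1) for _ in range(N)] for _ in range(CONSUMERS)]
margin = [random.uniform(1, 3) for _ in range(N)]

def profit(line):
    """Expected profit of a product line (tuple of profile indices)."""
    total = 0.0
    for u in util:
        expu = [math.exp(u[j]) for j in line]
        denom = 1.0 + sum(expu)            # the 1.0 is the no-buy option
        total += sum(e / denom * margin[j] for e, j in zip(expu, line))
    return total

def mutate(line):
    """Swap one profile in the line for one currently outside it."""
    out = set(line)
    out.remove(random.choice(line))
    out.add(random.choice([j for j in range(N) if j not in out]))
    return tuple(sorted(out))

# A (mu + lambda)-style evolutionary loop: keep the best half of the
# population, refill the rest with mutated copies of the survivors.
pop = [tuple(sorted(random.sample(range(N), K))) for _ in range(40)]
for gen in range(200):
    pop.sort(key=profit, reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
pop.sort(key=profit, reverse=True)
print("best line:", pop[0], " expected profit:", round(profit(pop[0]), 2))
```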
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm is known for them (and none exists unless P = NP). In practice, this means that finding an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, “quickly” (in reasonable run-times) and with “high” probability, provably “good” solutions (with small error relative to the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged, which combine heuristics in higher-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
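As a concrete instance of the intensification/diversification trade-off mentioned above, the following sketch (illustrative only, on an invented one-dimensional objective) shows simulated annealing, where a cooling temperature moves the search from diversification (accepting many uphill moves) toward intensification (accepting almost only improvements).

```python
import math
import random

# Illustrative sketch (not from the report): simulated annealing on a toy
# 1-D multimodal function. The temperature schedule makes the trade-off
# explicit: high T accepts worsening moves and explores (diversification);
# low T mostly accepts improvements and exploits (intensification).
random.seed(1)

def f(x):                                  # toy objective to minimise
    return x * x + 10 * math.sin(3 * x)

x = random.uniform(-10, 10)
best = x
for step in range(5000):
    T = 5.0 * (0.999 ** step)              # geometric cooling schedule
    cand = x + random.gauss(0, 1)          # local move in a neighbourhood
    delta = f(cand) - f(x)
    # Metropolis rule: always accept improvements; accept worsenings
    # with probability exp(-delta / T), which shrinks as T cools.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    if f(x) < f(best):
        best = x
print(f"best x = {best:.3f}, f(best) = {f(best):.3f}")
```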
The Optimality of Blocking Designs in Equally and Unequally Allocated Randomized Experiments with General Response
We consider the performance of the difference-in-means estimator in a two-arm randomized experiment under common experimental endpoints such as continuous (regression), incidence, proportion and survival. We examine performance under both equal and unequal allocation to treatment groups, and we consider both the Neyman randomization model and the population model. We show that in the Neyman model, where the only source of randomness is the treatment manipulation, there is no free lunch: complete randomization is minimax for the estimator's mean squared error. In the population model, where each subject experiences response noise with zero mean, the optimal design is the deterministic perfect-balance allocation. However, this allocation is generally NP-hard to compute and, moreover, depends on unknown response parameters. When considering the tail criterion of Kapelner et al. (2021), we show the optimal design is less random than complete randomization and more random than the deterministic perfect-balance allocation. We prove that Fisher's blocking design provides the asymptotically optimal degree of experimental randomness. Theoretical results are supported by simulations in all considered experimental settings.

Comment: 33 pages, 1 figure, 2 tables
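The flavour of these results can be reproduced in miniature. The sketch below (a toy population-model setup of my own, not the paper's simulation design) compares the simulated MSE of the difference-in-means estimator under complete randomization and under a simple blocking design that pairs subjects on an observed covariate; blocking should come out with the lower MSE.

```python
import random
import statistics

# Toy simulation (invented setup): compare the MSE of the difference-in-
# means estimator under complete randomization vs a Fisher-style blocking
# design that pairs subjects on a covariate and randomizes within pairs.
random.seed(2)
N, TAU, REPS = 40, 1.0, 5000          # subjects, true effect, replications

cov = sorted(random.gauss(0, 1) for _ in range(N))   # observed covariate

def outcome(i, treated):
    # population model: zero-mean response noise drawn fresh each time
    return TAU * treated + 2.0 * cov[i] + random.gauss(0, 1)

def diff_in_means(treat_idx):
    t = [outcome(i, 1) for i in treat_idx]
    c = [outcome(i, 0) for i in range(N) if i not in treat_idx]
    return statistics.mean(t) - statistics.mean(c)

def complete_randomization():
    return set(random.sample(range(N), N // 2))

def blocked():
    # pair adjacent subjects on the sorted covariate, randomize within pairs
    return {i + random.randint(0, 1) for i in range(0, N, 2)}

for name, design in [("complete", complete_randomization),
                     ("blocked", blocked)]:
    errs = [(diff_in_means(design()) - TAU) ** 2 for _ in range(REPS)]
    print(f"{name:9s} MSE = {statistics.mean(errs):.4f}")
```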
On the Consequences of the "No Free Lunch" Theorem for Optimization on the Choice of an Appropriate MDO Architecture
Multidisciplinary design optimization (MDO) based on high-fidelity models is challenging due to the high computational cost of evaluating the objective and constraints. To choose the best MDO architecture, a trial-and-error approach is not possible due to the high cost of the overall optimization and the complexity of the implementation. We propose to address this issue by developing a generic methodology that applies to any (potentially expensive) physical problem and generates a scalable approximation that can be computed quickly, and for which the input and output dimensions may be set independently. This facilitates the evaluation of MDO architectures for the original MDO problem by capturing its structure and behavior. The methodology is applied to two academic MDO test cases: the Supersonic Business Jet problem and the propane combustion problem. Well-known architectures (MDF, IDF and BLISS) are benchmarked on various instances to demonstrate how the performance of an architecture depends on the problem dimensions.
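For readers unfamiliar with the architectures being benchmarked, here is a deliberately tiny sketch of the MDF pattern (invented scalar disciplines, not the paper's test cases): in MDF, every evaluation of the objective first runs a multidisciplinary analysis (MDA) that converges the coupling variables, so the optimizer only ever sees multidisciplinarily feasible points.

```python
# Minimal MDF-style sketch with two invented, weakly coupled scalar
# "disciplines" linked through coupling variables y1 and y2. A simple
# Gauss-Seidel fixed-point loop plays the role of the MDA.

def discipline1(x, y2):
    return x * x + 0.2 * y2        # toy coupled output y1

def discipline2(x, y1):
    return x + 0.3 * y1            # toy coupled output y2

def mda(x, tol=1e-10):
    """Gauss-Seidel MDA: iterate disciplines until the coupling converges."""
    y1 = y2 = 0.0
    for _ in range(100):
        y1_new = discipline1(x, y2)
        y2_new = discipline2(x, y1_new)
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

def objective(x):
    y1, y2 = mda(x)                # MDF: feasible coupling at every iterate
    return (y1 - 1.0) ** 2 + 0.1 * y2 ** 2

# A brute-force 1-D grid search stands in for the outer optimizer here.
xs = [i / 1000 for i in range(-2000, 2001)]
x_best = min(xs, key=objective)
print(f"x* = {x_best:.3f}, objective = {objective(x_best):.6f}")
```

IDF, by contrast, would hand y1 and y2 to the optimizer as extra design variables with consistency constraints, trading the inner MDA loop for a larger optimization problem; that trade-off is exactly what the paper's scalable approximations are meant to let one evaluate cheaply.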