Stochastic Optimization in Econometric Models – A Comparison of GA, SA and RSG
This paper shows that, for an econometric model highly sensitive to the data, stochastic optimization algorithms outperform classical gradient techniques. In addition, we show that the Repetitive Stochastic Guesstimation (RSG) algorithm, invented by Charemza, is closer to Simulated Annealing (SA) than to Genetic Algorithms (GAs), so we produced hybrids between RSG and SA to study their joint behavior. All algorithms involved were evaluated on a short form of the Romanian macro model, derived from Dobrescu (1996). The subject of optimization was the model's solution, as a function of the initial values (in the first stage) and of the objective functions (in the second stage). We show that a priori information helps "elitist" algorithms (such as RSG and SA) obtain the best results; on the other hand, when one holds equal beliefs about the choice among different objective functions, GA gives a straight answer. Analyzing the average relative bias of the model's solution confirmed the efficiency of the stochastic optimization methods presented.
Keywords: underground economy, Laffer curve, informal activity, fiscal policy, transition, macroeconomic model, stochastic optimization, evolutionary algorithms, Repetitive Stochastic Guesstimation
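As a minimal sketch of the stochastic-optimization family compared in this paper, the following simulated annealing loop minimises a toy non-convex objective. The cooling schedule, step size, and objective function are illustrative assumptions only, not the paper's Romanian macro model and not Charemza's RSG procedure:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95, iters=500):
    """Minimise `objective` by accepting worse moves with a temperature-
    dependent probability. Illustrative sketch; parameters are assumptions."""
    random.seed(0)
    x, fx, t = x0, objective(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = objective(cand)
        # Always accept improvements; accept worse moves stochastically,
        # with probability that shrinks as the temperature cools.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f

# Toy non-convex objective with its global minimum at x = 0.
f = lambda x: x * x + 3 * abs(math.sin(3 * x))
x_star, f_star = simulated_annealing(f, x0=4.0)
print(x_star, f_star)
```

The same skeleton covers the "elitist" behaviour noted in the abstract: the best solution found so far is retained regardless of which moves are accepted.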
Fifty Years of Candidate Pulsar Selection - What next?
For fifty years astronomers have been searching for pulsar signals in
observational data. Throughout this time the process of choosing detections
worthy of investigation, so called candidate selection, has been effective,
yielding thousands of pulsar discoveries. Yet in recent years technological
advances have permitted the proliferation of pulsar-like candidates, straining
our candidate selection capabilities, and ultimately reducing selection
accuracy. To overcome such problems, we now apply intelligent machine learning
tools. Whilst these have achieved success, candidate volumes continue to
increase, and our methods have to evolve to keep pace with the change. This
talk considers how to meet this challenge as a community.
Comment: 4 pages, submitted: Proceedings of Pulsar Astrophysics: The Next
Fifty Years, IAU Symposium 33
Local search: A guide for the information retrieval practitioner
There are a number of combinatorial optimisation problems in information retrieval (IR) in which the use of local search methods is worthwhile. The purpose of this paper is to show how local search can be used to solve some well known tasks in IR, how previous research in the field is piecemeal, bereft of structure and methodologically flawed, and to suggest more rigorous ways of applying local search methods to IR problems. We provide a query-based taxonomy for analysing the use of local search in IR tasks, and an overview of issues such as fitness functions, statistical significance and test collections when conducting experiments on combinatorial optimisation problems. The paper describes the pitfalls and problems facing IR practitioners who wish to use local search in their research, and gives practical advice on the use of such methods. The query-based taxonomy is a novel structure which the IR practitioner can use to examine the use of local search in IR.
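Local search of the kind surveyed here can be sketched as bit-flip hill climbing over a binary decision vector, for instance which terms to include in a query, with any IR evaluation measure as the fitness function. The fitness below is a toy stand-in of my own, not one of the paper's tasks or test collections:

```python
import random

def hill_climb(fitness, n_bits, iters=200, seed=1):
    """Bit-flip hill climbing over a binary vector (e.g. which query terms
    to include); `fitness` stands in for any IR evaluation measure."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    fx = fitness(x)
    for _ in range(iters):
        i = rng.randrange(n_bits)
        x[i] ^= 1               # flip one term in or out of the query
        fc = fitness(x)
        if fc >= fx:
            fx = fc             # keep the improving (or equal) move
        else:
            x[i] ^= 1           # revert the worsening move
    return x, fx

# Toy fitness: reward terms 0-4, penalise the rest (a stand-in for a real
# measure such as mean average precision on a test collection).
good = {0, 1, 2, 3, 4}
fit = lambda x: sum(1 if i in good else -1 for i, b in enumerate(x) if b)
best, score = hill_climb(fit, n_bits=10)
print(best, score)
```

The fitness function is exactly the experimental lever the paper discusses: swapping in a different evaluation measure changes what the search optimises without touching the search itself.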
Search Heuristics, Case-Based Reasoning and Software Project Effort Prediction
This paper reports on the use of search techniques to help optimise a case-based reasoning (CBR) system for predicting software project effort. A major problem, common to machine learning techniques in general, is dealing with large numbers of case features, some of which can hinder the prediction process. Unfortunately, searching for the optimal feature subset is a combinatorial problem and therefore NP-hard. This paper examines the use of random searching, hill climbing and forward sequential selection (FSS) to tackle this problem. Results from a set of real software project data show that even random searching was better than using all available features (average error 35.6% rather than 50.8%). Hill climbing and FSS both produced results substantially better than random search (15.3% and 13.1% respectively), but FSS was more computationally efficient. Providing a description of the fitness landscape of a problem along with search results is a step towards the classification of search problems and their assignment to optimal search techniques. This paper attempts to describe the fitness landscape of this problem by combining the results from random searches and hill climbing, and by using multi-dimensional scaling to aid visualisation. Amongst other findings, the visualisation results suggest that some form of heuristic-based initialisation might prove useful for this problem.
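The forward sequential selection examined here can be sketched as a greedy loop that starts from the empty feature set and repeatedly adds whichever feature most reduces the error estimate, stopping when no addition helps. The error model and feature names below are hypothetical illustrations, not the authors' CBR system or their project data:

```python
def forward_sequential_selection(features, evaluate):
    """Greedy FSS: repeatedly add the feature that most improves
    `evaluate` (lower = better, e.g. mean prediction error), stopping
    when no candidate improves on the current subset."""
    selected, best_err = [], float("inf")
    remaining = list(features)
    while remaining:
        scored = [(evaluate(selected + [f]), f) for f in remaining]
        err, f = min(scored)
        if err >= best_err:      # no candidate improves: stop
            break
        selected.append(f)
        remaining.remove(f)
        best_err = err
    return selected, best_err

# Hypothetical error model: two informative features, the rest add noise.
def err(subset):
    e = 50.0
    e -= 20.0 * ("size" in subset)
    e -= 15.0 * ("team_exp" in subset)
    e += 5.0 * sum(1 for f in subset if f not in ("size", "team_exp"))
    return e

sel, e = forward_sequential_selection(["size", "lang", "team_exp", "kloc"], err)
print(sel, e)  # picks the two informative features, then stops
```

The computational advantage the paper reports is visible in the structure: FSS evaluates O(n^2) subsets in the worst case, against the 2^n subsets of an exhaustive search.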
On a Feasible–Infeasible Two-Population (FI-2Pop) Genetic Algorithm for Constrained Optimization: Distance Tracing and no Free Lunch
We explore data-driven methods for gaining insight into the dynamics of a two-population genetic algorithm (GA), which has been effective in tests on constrained optimization problems. We track and compare one population of feasible solutions and another population of infeasible solutions. Feasible solutions are selected and bred to improve their objective function values. Infeasible solutions are selected and bred to reduce their constraint violations. Interbreeding between populations is completely indirect, that is, only through their offspring that happen to migrate to the other population. We introduce an empirical measure of distance, and apply it between individuals and between population centroids to monitor the progress of evolution. We find that the centroids of the two populations approach each other and stabilize. This is a valuable characterization of convergence. We find the infeasible population influences, and sometimes dominates, the genetic material of the optimum solution. Since the infeasible population is not evaluated by the objective function, it is free to explore boundary regions, where the optimum is likely to be found. Roughly speaking, the No Free Lunch theorems for optimization show that all blackbox algorithms (such as Genetic Algorithms) have the same average performance over the set of all problems. As such, our algorithm would, on average, be no better than random search or any other blackbox search method. However, we provide two general theorems that give conditions that render null the No Free Lunch results for the constrained optimization problem class we study. The approach taken here thereby escapes the No Free Lunch implications, per se.
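A minimal sketch of the two-population scheme, assuming a real-valued genome, blend crossover, and Gaussian mutation (all illustrative choices not specified by the abstract): the feasible pool is selected on the objective, the infeasible pool on constraint violation, and each offspring migrates to whichever pool its own feasibility dictates. The toy problem places the optimum on the constraint boundary, the situation the abstract highlights:

```python
import random

rng = random.Random(2)

# Toy problem: minimise f(x) = (x - 3)^2 subject to x >= 2, so the
# optimum x = 3 is feasible and the boundary region is worth exploring.
def objective(x): return (x - 3.0) ** 2
def violation(x): return max(0.0, 2.0 - x)   # 0 when feasible

def breed(parents):
    """Blend crossover of two random parents plus Gaussian mutation."""
    a, b = rng.sample(parents, 2)
    return 0.5 * (a + b) + rng.gauss(0.0, 0.3)

def fi2pop(generations=60, size=20):
    feasible = [rng.uniform(2.0, 6.0) for _ in range(size)]
    infeasible = [rng.uniform(-2.0, 2.0) for _ in range(size)]
    for _ in range(generations):
        feasible.sort(key=objective)      # feasible: minimise objective
        infeasible.sort(key=violation)    # infeasible: minimise violation
        parents = feasible[: size // 2] + infeasible[: size // 2]
        children = [breed(parents) for _ in range(size)]
        # Each child joins the pool matching its own feasibility; this is
        # the only (indirect) interbreeding between the two populations.
        feasible = feasible[: size // 2] + [c for c in children if violation(c) == 0.0]
        infeasible = infeasible[: size // 2] + [c for c in children if violation(c) > 0.0]
        if not feasible:
            feasible = [2.0]              # guard against an empty pool
        if not infeasible:
            infeasible = [1.9]
    return min(feasible, key=objective)

best = fi2pop()
print(best)
```

Because the infeasible pool is never scored by the objective, its members can hover just outside the boundary and inject boundary-region genetic material into feasible offspring, which is the mechanism the abstract credits for finding optima near constraints.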