485 research outputs found

    New approach on global optimization problems based on meta-heuristic algorithm and quasi-Newton method

    This paper presents an approach to finding optimal solutions of multimodal, multivariable functions in global optimization problems whose second derivatives are complex or inefficient to compute. The artificial bee colony (ABC) algorithm possesses good exploration ability, but a major weakness at its exploitation stage. The proposed algorithms address this weakness by hybridizing ABC with two effective gradient-based methods, the Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms. A distinguishing feature is that they maximize the use of information about the objective function gathered in previous iterations. The proposed algorithms have been tested on a large set of benchmark global optimization problems, where they showed satisfactory computational behaviour and succeeded in obtaining solutions to the global optimization problems.
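    A minimal sketch of the hybrid idea this abstract describes: a simplified ABC-style population search for exploration, followed by BFGS refinement of the best candidate. This is an illustration under assumptions, not the authors' implementation; the Rastrigin test function and all parameters below are chosen only for the example.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # A standard multimodal benchmark with many local minima.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
dim, n_bees, n_iters = 5, 30, 200
lo, hi = -5.12, 5.12

# Exploration: a crude bee-colony-style search. Each "bee" perturbs its
# food source relative to a random neighbour; better positions are kept.
pop = rng.uniform(lo, hi, size=(n_bees, dim))
fit = np.apply_along_axis(rastrigin, 1, pop)
for _ in range(n_iters):
    for i in range(n_bees):
        j = rng.integers(n_bees)
        phi = rng.uniform(-1, 1, dim)
        cand = np.clip(pop[i] + phi * (pop[i] - pop[j]), lo, hi)
        f = rastrigin(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f

# Exploitation: refine the best food source with quasi-Newton BFGS,
# which uses curvature information the population search ignores.
best = pop[np.argmin(fit)]
res = minimize(rastrigin, best, method="BFGS")
print("ABC best:", fit.min(), "-> after BFGS:", res.fun)
```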

    A hybrid approach to constrained global optimization

    In this paper, we propose a novel hybrid global optimization method for solving constrained optimization problems. An exact penalty function is first applied to approximate the original constrained problem by a sequence of optimization problems with bound constraints. To solve each of these box-constrained problems, two hybrid methods are introduced, using two different strategies to combine limited-memory BFGS (L-BFGS) with Greedy Diffusion Search (GDS). The convergence of the two hybrid methods is addressed. To evaluate the effectiveness of the proposed algorithm, 18 box-constrained and 4 general constrained problems from the literature are tested. The numerical results show that our hybrid algorithm obtains more accurate solutions than the methods it is compared against.
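    A hedged sketch of the penalty-plus-L-BFGS idea above: an exact (l1) penalty turns a general constraint g(x) <= 0 into a box-constrained problem, here solved by L-BFGS-B from random restarts. The multistart loop stands in for Greedy Diffusion Search, whose details are not reproduced; the test problem and penalty weight are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

def g(x):
    # Inequality constraint written in the form g(x) <= 0.
    return x[0] + x[1] - 3.0

def penalized(x, mu=50.0):
    # Exact l1 penalty: nonsmooth at the boundary, but workable here;
    # mu must exceed the constraint's Lagrange multiplier to be exact.
    return objective(x) + mu * max(0.0, g(x))

bounds = [(0.0, 5.0), (0.0, 5.0)]  # the box constraints
rng = np.random.default_rng(1)
best = None
for _ in range(20):  # simple multistart in place of GDS
    x0 = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds])
    res = minimize(penalized, x0, method="L-BFGS-B", bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res
print("x* =", best.x, "f(x*) =", objective(best.x), "g(x*) =", g(best.x))
```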

    Domain Adaptation for Statistical Classifiers

    The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the "in-domain" test data is drawn from a distribution that is related, but not identical, to the "out-of-domain" distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework for maximum entropy classifiers and their linear-chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real-world tasks across four different data sets from the natural language processing domain.
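    A rough sketch of the mixture idea: each scarce in-domain example is explained either by an in-domain component or by a "general" component shared with the plentiful out-of-domain data, with responsibilities updated EM-style. This uses synthetic data and plain logistic regression as an illustration; it is not the paper's maximum-entropy models or its exact conditional-EM updates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift):
    # Toy two-feature classification data; `shift` moves the domain.
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1]
         + rng.normal(scale=0.5, size=n) > shift.sum()).astype(int)
    return X, y

X_out, y_out = make_data(2000, np.array([0.0, 0.0]))  # plentiful out-of-domain
X_in, y_in = make_data(60, np.array([1.0, -0.5]))     # scarce in-domain

general = LogisticRegression().fit(X_out, y_out)
indom = LogisticRegression().fit(X_in, y_in)
pi = 0.5  # prior that an in-domain example is "truly in-domain"

for _ in range(10):  # EM-style alternation
    # E-step: responsibility that each in-domain label is explained by
    # the in-domain component rather than the general one.
    p_in = indom.predict_proba(X_in)[np.arange(len(y_in)), y_in]
    p_gen = general.predict_proba(X_in)[np.arange(len(y_in)), y_in]
    r = pi * p_in / (pi * p_in + (1 - pi) * p_gen + 1e-12)
    # M-step: refit both components on responsibility-weighted data.
    indom.fit(X_in, y_in, sample_weight=r + 1e-6)
    Xg = np.vstack([X_out, X_in])
    yg = np.concatenate([y_out, y_in])
    wg = np.concatenate([np.ones(len(y_out)), 1 - r + 1e-6])
    general.fit(Xg, yg, sample_weight=wg)
    pi = r.mean()

print("mixture weight pi =", round(pi, 3))
```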

    EmiR: Evolutionary minimization for R

    Classical minimization methods, like steepest descent or quasi-Newton techniques, have been shown to struggle with optimization problems that have a high-dimensional search space or are subject to complex nonlinear constraints. In the last decade, interest in metaheuristic nature-inspired algorithms has grown steadily, owing to their flexibility and effectiveness. In this paper we present EmiR, a package for R that implements several metaheuristic algorithms for optimization problems. Unlike other available tools, EmiR can be used not only for unconstrained problems, but also for problems subject to inequality constraints and for integer or mixed-integer problems. The main features of EmiR, its usage, and a comparison with other available tools are presented.
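    EmiR itself is an R package; as a language-neutral illustration of the feature set described (metaheuristic search with inequality constraints and mixed-integer variables), here is a tiny differential-evolution-style loop in Python. The objective, constraint, and all settings are assumptions for the example and bear no relation to EmiR's API.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    # Minimize with x[0] continuous in [0, 10] and x[1] integer in
    # [0, 10], subject to x[0] + x[1] >= 4 (handled with a penalty).
    return (x[0] - 2.3) ** 2 + (x[1] - 3) ** 2

def penalized(x):
    return f(x) + 1e3 * max(0.0, 4.0 - (x[0] + x[1]))

def repair(x):
    # Clip to the box and enforce the integer variable by rounding.
    x = np.clip(x, 0.0, 10.0)
    x[1] = round(x[1])
    return x

pop = np.array([repair(rng.uniform(0, 10, 2)) for _ in range(20)])
fit = np.array([penalized(x) for x in pop])
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = repair(a + 0.8 * (b - c))  # DE/rand/1 mutation, no crossover
        ft = penalized(trial)
        if ft < fit[i]:
            pop[i], fit[i] = trial, ft

best = pop[np.argmin(fit)]
print("best:", best, "f:", f(best), "feasible:", best.sum() >= 4)
```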