
    Branch-and-Prune Search Strategies for Numerical Constraint Solving

    When solving numerical constraints such as nonlinear equations and inequalities, solvers often apply pruning techniques, which remove redundant value combinations from the domains of the variables. To find the complete solution set, most of these solvers alternate such pruning steps with branching steps, which split each problem into subproblems; this forms the so-called branch-and-prune framework, well known among the approaches for solving numerical constraints. The basic branch-and-prune search strategy, which performs its branching steps by domain bisection, is called the bisection search. In general, the bisection search works well when (i) the solutions are isolated, but it can be improved further when (ii) there are continuums of solutions (which often occurs when inequalities are involved). In this paper, we propose a new branch-and-prune search strategy, along with several variants, that not only yields better branching decisions in the latter case but also performs as well as the bisection search in the former case. These new search algorithms enable us to employ various pruning techniques in the construction of inner and outer approximations of the solution set. Our experiments show that these algorithms often speed up the solving process by one order of magnitude or more on problems with continuums of solutions, while matching the performance of the bisection search when the solutions are isolated. Comment: 43 pages, 11 figures.
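    A minimal sketch of the basic branch-and-prune loop with bisection branching (not the new strategies proposed in the paper), illustrated on a single inequality x^2 + y^2 <= 1 over the box [-2, 2]^2. The interval arithmetic is hand-rolled for this one constraint; a real solver would use a generic interval library and contractors.

```python
def square_range(lo, hi):
    """Interval extension of x -> x^2 on [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def classify(box):
    """Return 'inner', 'outer', or 'unknown' for x^2 + y^2 <= 1 on the box."""
    (xl, xu), (yl, yu) = box
    fxl, fxu = square_range(xl, xu)
    fyl, fyu = square_range(yl, yu)
    lo, hi = fxl + fyl, fxu + fyu
    if hi <= 1.0:
        return "inner"    # every point of the box satisfies the constraint
    if lo > 1.0:
        return "outer"    # no point of the box satisfies it: prune
    return "unknown"

def branch_and_prune(box, eps=0.25):
    inner, boundary = [], []
    stack = [box]
    while stack:
        b = stack.pop()
        status = classify(b)
        if status == "outer":
            continue                      # pruning step: discard the box
        if status == "inner":
            inner.append(b)               # box belongs to the inner approximation
            continue
        (xl, xu), (yl, yu) = b
        if max(xu - xl, yu - yl) <= eps:  # box small enough: keep as boundary
            boundary.append(b)
            continue
        # branching step: bisect the widest variable domain
        if xu - xl >= yu - yl:
            xm = 0.5 * (xl + xu)
            stack += [((xl, xm), (yl, yu)), ((xm, xu), (yl, yu))]
        else:
            ym = 0.5 * (yl + yu)
            stack += [((xl, xu), (yl, ym)), ((xl, xu), (ym, yu))]
    return inner, boundary

inner, boundary = branch_and_prune(((-2.0, 2.0), (-2.0, 2.0)))
print(len(inner), "inner boxes,", len(boundary), "boundary boxes")
```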

    Formal Proofs for Nonlinear Optimization

    We present a formally verified global optimization framework. Given a semialgebraic or transcendental function $f$ and a compact semialgebraic domain $K$, we use the nonlinear maxplus template approximation algorithm to provide a certified lower bound of $f$ over $K$. This method allows us to bound, in a modular way, some of the constituents of $f$ by suprema of quadratic forms with a well-chosen curvature. Thus, we reduce the initial goal to a hierarchy of semialgebraic optimization problems, solved by sums-of-squares relaxations. Our implementation tool interleaves semialgebraic approximations with sums-of-squares witnesses to form certificates. It is interfaced with Coq and thus benefits from the trusted arithmetic available inside the proof assistant. This feature is used to produce, from the certificates, both valid underestimators and lower bounds for each approximated constituent. The application range for such a tool is widespread; for instance, Hales' proof of Kepler's conjecture yields thousands of multivariate transcendental inequalities. We illustrate the performance of our formal framework on some of these inequalities as well as on examples from the global optimization literature. Comment: 24 pages, 2 figures, 3 tables.
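    A minimal numerical sketch of the quadratic-underestimator idea behind maxplus templates, assuming only the Taylor-Lagrange argument: a transcendental constituent f is bounded below on a compact interval by a parabola tangent at a sample point, using a lower bound c on f'' over the interval. The paper's framework produces formally checked certificates in Coq; this only illustrates the shape of the approximation.

```python
import math

def quad_underestimator(f, df, x0, c):
    """Return q(x) = f(x0) + f'(x0)(x - x0) + c/2 (x - x0)^2.
    If c <= f''(x) on the interval, then q(x) <= f(x) there (Taylor-Lagrange)."""
    fx0, dfx0 = f(x0), df(x0)
    return lambda x: fx0 + dfx0 * (x - x0) + 0.5 * c * (x - x0) ** 2

# Constituent: f(x) = sin(x) on K = [0, pi]; f''(x) = -sin(x) >= -1 there.
f, df, c = math.sin, math.cos, -1.0
q = quad_underestimator(f, df, x0=math.pi / 2, c=c)

# Check q <= f on a grid, and read off a lower bound of f over K from q.
xs = [i * math.pi / 1000 for i in range(1001)]
assert all(q(x) <= f(x) + 1e-12 for x in xs)
print("lower bound of sin on [0, pi] from the template:", min(q(x) for x in xs))
```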

    Brain image clustering by wavelet energy and CBSSO optimization algorithm

    The diagnosis of brain abnormality is important for saving social and hospital resources. Wavelet energy is known to be an effective feature descriptor that has proven efficient in a variety of applications. This paper suggests a new method based on wavelet energy to automatically classify magnetic resonance imaging (MRI) brain images into two groups (normal and abnormal), using a support vector machine (SVM) classifier whose weights are optimized by chaotic binary shark smell optimization (CBSSO). The results of the proposed CBSSO-based kernel SVM (KSVM) compare favorably with those of several other methods in terms of sensitivity and authenticity. The proposed computer-aided diagnosis (CAD) system can additionally be used to categorize images with various pathological conditions, types, and illness modes.
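    A minimal sketch of the wavelet-energy + SVM pipeline described above, using PyWavelets and scikit-learn on random stand-in "images". The CBSSO optimizer the paper uses to tune the SVM is not reproduced here; a plain RBF-kernel SVM stands in.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(image, wavelet="db4", level=2):
    """Energy of each 2-D wavelet subband, used as the feature vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]                 # approximation energy
    for detail in coeffs[1:]:                        # (cH, cV, cD) per level
        feats.extend(np.sum(band ** 2) for band in detail)
    return np.asarray(feats)

rng = np.random.default_rng(0)
images = rng.normal(size=(40, 64, 64))               # stand-in MRI slices
labels = rng.integers(0, 2, size=40)                 # 0 = normal, 1 = abnormal

X = np.stack([wavelet_energy_features(img) for img in images])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```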

    AMPSO: A new Particle Swarm Method for Nearest Neighborhood Classification

    Nearest-prototype methods can be quite successful on many pattern classification problems. In these methods, a collection of prototypes has to be found that accurately represents the input patterns, and the classifier then assigns classes based on the nearest prototype in this collection. In this paper, we first use the standard particle swarm optimization (PSO) algorithm to find those prototypes. Second, we present a new algorithm, called adaptive Michigan PSO (AMPSO), which reduces the dimension of the search space and provides more flexibility than the former in this application. AMPSO is based on a different approach to particle swarms: each particle in the swarm represents a single prototype in the solution. The swarm does not converge to a single solution; instead, each particle is a local classifier, and the whole swarm is taken as the solution to the problem. It uses modified PSO equations with both particle competition and cooperation and a dynamic neighborhood. As an additional feature, in AMPSO the number of prototypes represented in the swarm can adapt to the problem, increasing as needed the number of prototypes and of prototype classes that make up the solution. We compared the results of the standard PSO and AMPSO on several benchmark problems from the University of California, Irvine (UCI) data sets and found that AMPSO always found a better solution than the standard PSO. We also found that it was able to improve the results of nearest-neighbor classifiers, and that it is competitive with some of the algorithms most commonly used for classification. This work was supported by the Spanish funded research project MSTAR::UC3M, Ref. TIN2008-06491-C04-03, and CAM project CCG06-UC3M/ESP-0774.
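    A minimal sketch of the first variant described above: a standard (global-best) PSO searching for a set of class prototypes that a nearest-prototype rule then uses for classification. The adaptive Michigan variant (AMPSO), where each particle is a single prototype, is not reproduced here, and the data are a random stand-in rather than the UCI sets.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
n_proto, dim = 2, X.shape[1]              # one prototype per class
proto_labels = np.array([0, 1])

def accuracy(flat_protos):
    """Fitness: training accuracy of the nearest-prototype rule."""
    protos = flat_protos.reshape(n_proto, dim)
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return np.mean(proto_labels[np.argmin(d, axis=1)] == y)

# Global-best PSO over the concatenated prototype coordinates.
n_particles, iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
pos = rng.normal(1.5, 2.0, (n_particles, n_proto * dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([accuracy(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([accuracy(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("training accuracy of PSO-found prototypes:", accuracy(gbest))
```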

    Polyhedral Predictive Regions For Power System Applications

    Despite substantial improvement in the development of forecasting approaches, conditional and dynamic uncertainty estimates ought to be accommodated in decision-making for power system operation and markets, in order to yield either cost-optimal decisions in expectation or decisions with probabilistic guarantees. The representation of uncertainty serves as an interface between forecasting and decision-making problems, with different approaches handling various objects and their parameterization as input. Following substantial developments based on scenario-based stochastic methods, robust and chance-constrained optimization approaches have gained increasing attention. These often rely on polyhedra as a representation of the convex envelope of the uncertainty. In this work, we aim to bridge the gap between the probabilistic forecasting literature and such optimization approaches by generating forecasts in the form of polyhedra with probabilistic guarantees. For that, we see polyhedra as parameterized objects under alternative definitions (under the $L_1$ and $L_\infty$ norms), the parameters of which may be modelled and predicted. We additionally discuss how to assess the predictive skill of such multivariate probabilistic forecasts. An application and related empirical investigation allow us to verify the probabilistic calibration and predictive skill of our polyhedra. Comment: 8 pages.
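    A minimal sketch of parameterizing a polyhedral predictive region: here an $L_\infty$-norm box centered on a point forecast, with a radius chosen so that a nominal proportion of past multivariate errors falls inside. The paper models and predicts such parameters; this sketch uses a plain empirical quantile and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)
H = 24                                            # forecast horizon (e.g. hours)
errors = rng.normal(0, 1, (500, H))               # past forecast errors (stand-in)

# Radius of the L_inf ball: quantile of the worst-case absolute error per sample.
nominal = 0.90
radius = np.quantile(np.max(np.abs(errors), axis=1), nominal)

point_forecast = rng.normal(0, 1, H)              # stand-in point forecast
lower, upper = point_forecast - radius, point_forecast + radius

# Empirical calibration check on fresh errors.
new_errors = rng.normal(0, 1, (2000, H))
inside = np.all(np.abs(new_errors) <= radius, axis=1)
print(f"nominal {nominal:.2f}, empirical coverage {inside.mean():.3f}")
print("box half-width (same in every dimension):", round(radius, 3))
```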

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes, surprisingly, even lead to polynomial-time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research. Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958--2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274
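    A toy instance of the problem class discussed above: a nonlinear objective over the integer points of a polyhedron {x : Ax <= b, x >= 0}. Brute-force enumeration is used only to make the setting concrete; the chapter surveys far more sophisticated (and sometimes polynomial-time) algorithms for structured cases.

```python
import itertools
import numpy as np

A = np.array([[1, 2], [3, 1]])      # linear constraints  A x <= b
b = np.array([8, 9])
f = lambda x: (x[0] - 1.5) ** 2 * x[1] - x[0] * x[1] ** 2   # nonlinear objective

# Enumerate nonnegative integer points in a bounding box and keep the feasible maximizer.
best_x, best_val = None, float("-inf")
for x in itertools.product(range(10), repeat=2):
    x = np.array(x)
    if np.all(A @ x <= b) and f(x) > best_val:
        best_x, best_val = x, f(x)

print("optimal integer point:", best_x, "objective value:", best_val)
```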

    Forecasting day-ahead electricity prices in Europe: the importance of considering market integration

    Motivated by the increasing integration among electricity markets, in this paper we propose two different methods to incorporate market integration into electricity price forecasting and to improve predictive performance. First, we propose a deep neural network that considers features from connected markets to improve the predictive accuracy in a local market. To measure the importance of these features, we propose a novel feature selection algorithm that, by using Bayesian optimization and functional analysis of variance, evaluates the effect of the features on the algorithm's performance. In addition, using market integration, we propose a second model that, by simultaneously predicting prices in two markets, improves the forecasting accuracy even further. As a case study, we consider the electricity market in Belgium and the improvements in forecasting accuracy obtained when using various French electricity features. We show that the two proposed models lead to improvements that are statistically significant. In particular, due to market integration, the predictive accuracy is improved from 15.7% to 12.5% sMAPE (symmetric mean absolute percentage error). In addition, we show that the proposed feature selection algorithm is able to perform a correct assessment, i.e. to discard the irrelevant features.
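    A minimal sketch of the "market integration" idea above: a regressor for day-ahead prices in one market that also receives lagged features from a connected market, evaluated with sMAPE. The deep networks and the Bayesian-optimization feature selection of the paper are not reproduced; scikit-learn's MLPRegressor and synthetic price series stand in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 600
fr = 50 + 10 * np.sin(np.arange(n) / 24) + rng.normal(0, 2, n)   # "French" prices (stand-in)
be = 0.7 * fr + rng.normal(0, 3, n)                               # coupled "Belgian" prices (stand-in)

lag = 24
X = np.column_stack([be[:-lag], fr[:-lag]])     # local + connected-market lagged features
y = be[lag:]                                    # price 24 steps ahead
split = 500
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
smape = 100 * np.mean(2 * np.abs(pred - y[split:]) / (np.abs(pred) + np.abs(y[split:])))
print(f"sMAPE on held-out period: {smape:.1f}%")
```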

    LightDock: a new multi-scale approach to protein–protein docking

    Computational prediction of protein–protein complex structure by docking can provide structural and mechanistic insights for protein interactions of biomedical interest. However, current methods struggle with difficult cases, such as those involving flexible proteins, low-affinity complexes or transient interactions. A major challenge is how to efficiently sample the structural and energetic landscape of the association at different resolution levels, given that each scoring function is often highly coupled to a specific type of search method. Thus, new methodologies capable of accommodating multi-scale conformational flexibility and scoring are strongly needed. We describe here a new multi-scale protein–protein docking methodology, LightDock, capable of accommodating conformational flexibility and a variety of scoring functions at different resolution levels. Implicit use of normal modes during the search and combined atomic/coarse-grained scoring functions yielded improved predictive results with respect to state-of-the-art rigid-body docking, especially in flexible cases. B.J-G was supported by an FPI fellowship from the Spanish Ministry of Economy and Competitiveness. This work was supported by I+D+I Research Project grants BIO2013-48213-R and BIO2016-79930-R from the Spanish Ministry of Economy and Competitiveness. This work is partially supported by the European Union H2020 program through HiPEAC (GA 687698), by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology (TIN2015-65316-P), and by the Departament d'Innovació, Universitats i Empresa de la Generalitat de Catalunya under project MPEXPAR: Models de Programació i Entorns d'Execució Paral·lels (2014-SGR-1051).
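    A minimal sketch of one ingredient described above: perturbing a structure along a few (here random, orthonormalized) "normal mode" directions, so that a search algorithm can explore backbone flexibility through a handful of mode weights instead of all atomic coordinates. This only illustrates the idea; it is not LightDock's implementation, and the coordinates and modes are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
n_atoms, n_modes = 50, 5
coords = rng.normal(0, 10, (n_atoms, 3))                 # stand-in structure

# Stand-in modes: random orthonormal displacement vectors of length 3*n_atoms.
raw = rng.normal(size=(3 * n_atoms, n_modes))
modes, _ = np.linalg.qr(raw)                              # columns are orthonormal

def deform(coords, modes, weights):
    """Displace the structure by a weighted sum of mode vectors."""
    disp = (modes @ weights).reshape(-1, 3)
    return coords + disp

weights = rng.normal(0, 2, n_modes)                       # the search variables
candidate = deform(coords, modes, weights)
rmsd = np.sqrt(np.mean(np.sum((candidate - coords) ** 2, axis=1)))
print(f"candidate conformation differs from the start by {rmsd:.2f} units RMSD")
```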