
    General models in min-max continuous location

    In this paper, a class of min-max continuous location problems is discussed. After giving a complete characterization of the stationary points, we propose a simple central and deep-cut ellipsoid algorithm to solve these problems for the quasiconvex case. Moreover, an elementary convergence proof of this algorithm and some computational results are presented.
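
    As a minimal, hedged sketch (a standard central-cut iteration, simpler than the central and deep-cut algorithm proposed in the paper), the code below minimizes the weighted min-max Euclidean location objective min_x max_i w_i ||x - a_i||, which is convex and hence quasiconvex; the demand points, weights, starting radius and iteration limit are illustrative assumptions.

        import numpy as np

        def minmax_location_ellipsoid(pts, w, radius=100.0, iters=200):
            """Central-cut ellipsoid sketch for min_x max_i w_i * ||x - a_i|| (n >= 2).

            pts: (m, n) array of demand points; w: (m,) positive weights.
            The initial ellipsoid is the ball of the given radius around the centroid.
            """
            m, n = pts.shape
            c = pts.mean(axis=0)                       # ellipsoid centre
            P = np.eye(n) * radius**2                  # ellipsoid shape matrix
            best_x = c.copy()
            best_val = np.max(w * np.linalg.norm(pts - c, axis=1))
            for _ in range(iters):
                d = np.linalg.norm(pts - c, axis=1)
                vals = w * d
                j = int(np.argmax(vals))               # active (weighted farthest) demand point
                if vals[j] < best_val:
                    best_x, best_val = c.copy(), vals[j]
                g = w[j] * (c - pts[j]) / max(d[j], 1e-12)   # subgradient of the max at c
                gPg = g @ P @ g
                if gPg <= 1e-16:                       # ellipsoid has collapsed
                    break
                gt = g / np.sqrt(gPg)
                c = c - (P @ gt) / (n + 1)             # centre update (central cut)
                P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gt, gt @ P))
            return best_x, best_val

    For instance, minmax_location_ellipsoid(np.random.rand(20, 2), np.ones(20)) approximates the centre of the minimum enclosing circle of twenty random planar points, the unweighted special case of this objective.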

    On the selection of the globally optimal prototype subset for nearest-neighbor classification

    The nearest-neighbor classifier has been shown to be a powerful tool for multiclass classification. We explore both theoretical properties and empirical behavior of a variant method, in which the nearest-neighbor rule is applied to a reduced set of prototypes. This set is selected a priori by fixing its cardinality and minimizing the empirical misclassification cost. In this way, we alleviate the two serious drawbacks of the nearest-neighbor method: high storage requirements and time-consuming queries. Finding this reduced set is shown to be NP-hard. We provide mixed integer programming (MIP) formulations, which are theoretically compared and solved by a standard MIP solver for small problem instances. We show that the classifiers derived from these formulations are comparable to benchmark procedures. We solve large problem instances by a metaheuristic that yields good classification rules in reasonable time. Additional experiments indicate that prototype-based nearest-neighbor classifiers remain quite stable in the presence of missing values.
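
    As a rough illustration of the prototype-selection idea, and not of the paper's MIP formulations or metaheuristic, the sketch below enumerates every prototype subset of a fixed cardinality and keeps the one with the lowest empirical misclassification count under the 1-NN rule; the function names and the unweighted 0-1 cost are assumptions, and the exhaustive enumeration is only feasible for very small instances (the selection problem is NP-hard).

        import numpy as np
        from itertools import combinations

        def select_prototypes(X, y, k):
            """Brute-force search for the k prototypes minimizing the 1-NN training error."""
            best_idx, best_err = None, np.inf
            for idx in combinations(range(len(X)), k):
                idx = list(idx)
                d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)   # (m, k) distances
                pred = y[np.array(idx)[np.argmin(d, axis=1)]]                    # 1-NN over the subset
                err = int(np.sum(pred != y))
                if err < best_err:
                    best_idx, best_err = idx, err
            return best_idx, best_err

        def classify(X_new, X, y, proto_idx):
            """Apply the 1-NN rule restricted to the selected prototypes."""
            d = np.linalg.norm(X_new[:, None, :] - X[proto_idx][None, :, :], axis=2)
            return y[np.array(proto_idx)[np.argmin(d, axis=1)]]

    Replacing the exhaustive loop with local-search or metaheuristic moves over subsets is the natural route for the large instances mentioned above.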

    Incorporating Neighborhood Reduction for the Solution of the Planar p-Median Problem

    Two efficient neighbourhood reduction schemes are proposed for the solution of the p-median problem on the plane. Their integration into a local search significantly reduces the run time with an insignificant deterioration in the quality of the solution. For completeness, this fast local search is also embedded into one of the most powerful meta-heuristic algorithms recently developed for this continuous location problem. Excellent results for instances with up to 1,060 demand points and various values of p are reported. Eight new best-known solutions for ten instances of a large problem with 3,038 demand points and up to 500 facilities are also found.
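
    The paper's reduction schemes are not reproduced here, but the following minimal sketch (all names and parameters are assumptions) shows the kind of alternating allocation-location local search they speed up for the planar p-median problem, together with one generic saving: after each reallocation, only clusters whose membership actually changed are re-located with Weiszfeld iterations.

        import numpy as np

        def weiszfeld(points, iters=50, eps=1e-8):
            """Approximate 1-median of a set of planar points (Weiszfeld iterations)."""
            x = points.mean(axis=0)
            for _ in range(iters):
                d = np.maximum(np.linalg.norm(points - x, axis=1), eps)
                x_new = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
                if np.linalg.norm(x_new - x) < eps:
                    break
                x = x_new
            return x

        def p_median_local_search(pts, p, iters=100, seed=0):
            """Alternating allocation/location heuristic for the planar p-median problem."""
            rng = np.random.default_rng(seed)
            centres = pts[rng.choice(len(pts), p, replace=False)].copy()
            assign = np.full(len(pts), -1)
            for _ in range(iters):
                d = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
                new_assign = np.argmin(d, axis=1)               # allocation step
                mask = new_assign != assign
                if not mask.any():
                    break                                        # no reassignment: local optimum
                changed = np.unique(np.concatenate([new_assign[mask], assign[mask]]))
                changed = changed[changed >= 0]                  # drop the initial -1 sentinel
                assign = new_assign
                for j in changed:                                # relocate only the affected clusters
                    members = pts[assign == j]
                    if len(members):
                        centres[j] = weiszfeld(members)
            return centres, assign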

    Maximizing upgrading and downgrading margins for ordinal regression

    In ordinal regression, a score function and threshold values are sought to classify a set of objects into a set of ranked classes. Classifying an individual in a class with a higher (respectively lower) rank than its actual rank is called an upgrading (respectively downgrading) error. Since upgrading and downgrading errors may not have the same importance, they should be considered as two different criteria to be taken into account when measuring the quality of a classifier. In Support Vector Machines, margin maximization is used as an effective and computationally tractable surrogate of the minimization of misclassification errors. As an extension, we consider in this paper the maximization of the upgrading and downgrading margins as a surrogate of the minimization of upgrading and downgrading errors, and we address the biobjective problem of finding a classifier that maximizes the two margins simultaneously. The whole set of Pareto-optimal solutions of this biobjective problem is described as translations of the optimal solutions of a scalar optimization problem. For the most popular case, in which the Euclidean norm is considered, the scalar problem has a unique solution, so all the Pareto-optimal solutions of the biobjective problem are translations of each other. Hence, the Pareto-optimal solutions can easily be provided to the analyst, who, after inspecting the misclassification errors they cause, should choose the most convenient classifier at a later stage. The consequence of this analysis is that it provides a theoretical foundation for a popular strategy among practitioners, based on the so-called ROC curve, which is shown here to coincide with the set of Pareto-optimal solutions obtained by simultaneously maximizing the downgrading and upgrading margins.
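
    As a schematic illustration only (the notation below is assumed, not taken from the paper), with a linear score function s(x) = w^T x, thresholds b_1 <= ... <= b_{K-1}, and an object x_i of rank y_i in {1, ..., K}, the two normalized margins and the biobjective problem can be written in LaTeX as

        m_{\mathrm{down}}(w,b) = \min_{i\,:\,y_i>1} \frac{w^\top x_i - b_{\,y_i-1}}{\lVert w\rVert},
        \qquad
        m_{\mathrm{up}}(w,b) = \min_{i\,:\,y_i<K} \frac{b_{\,y_i} - w^\top x_i}{\lVert w\rVert},
        \qquad
        \max_{w,\,b}\ \bigl(m_{\mathrm{down}}(w,b),\ m_{\mathrm{up}}(w,b)\bigr).

    In this schematic form, adding a common constant to all the thresholds trades one margin for the other, which is consistent with the description above of the Pareto-optimal set as translations of the solutions of a single scalar problem.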

    Neighbourhood Reduction in Global and Combinatorial Optimization: The Case of the p-Centre Problem

    Neighbourhood reductions for a class of location problems known as the vertex (or discrete) and planar (or continuous) p-centre problems are presented. A brief review of these two forms of the p-centre problem is first provided, followed by the respective reduction schemes that have been shown to be promising. These reduction schemes can transform optimal or near-optimal methods, such as metaheuristics or relaxation-based procedures that were considered relatively slow, into efficient ones that are now able to find optimal solutions or tight lower/upper bounds for larger instances. Research highlights of neighbourhood reduction for global and combinatorial optimisation problems in general, and for related location problems in particular, are also given.

    A note towards improved homeland defense

    We indicate a number of shortcomings in the second stage of the approach of Bell et al. [1] that may lead to unsatisfactory results. Some possible remedies are investigated.
    Keywords: Decision making/process; Location; Optimization; Set covering; Allocation