6 research outputs found

    Finding Common Ground When Experts Disagree: Robust Portfolio Decision Analysis

    Full text link

    Analysis of an Algorithm for Identifying Pareto Points in Multi-Dimensional Data Sets. 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference

    No full text
    In this paper we present results from analytical and experimental investigations into the performance of divide & conquer algorithms for determining Pareto points in multi-dimensional data sets of size n and dimension d. The focus in this work is on the worst case, where all points are Pareto, but the analysis extends to problem sets where only a subset of the points is Pareto. Analysis supported by experiment shows that the number of comparisons is bounded by two different curves, one that is O(n (log n)^(d-2)) and one that is O(n^(log 3)); which bound is active depends on the relative values of n and d. The number of comparisons is also very sensitive to the structure of the data, varying by orders of magnitude for data sets with the same number of Pareto points.
    Nomenclature:
    n = number of points in a data set
    d = dimension of the data set
    TZ… = table of n records, each record having d attributes
    tz… = a record with d attributes
    ti, zi, … = the ith attribute in a record
    DC = Divide & Conquer algorithm
    pbf[n,d] = estimator for the number of comparisons in the DC algorithm, n points and dimension d
    mbf[n,d] = estimator for the number of comparisons in the marriage step of the DC algorithm
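    To illustrate the divide & conquer idea the abstract describes, the Python sketch below filters a point set down to its Pareto points by splitting the data, recursing on each half, and cross-comparing the survivors. It is only a conceptual sketch under assumptions the abstract does not fix: maximization in every dimension, and a naive quadratic "marriage" step rather than the more elaborate marriage step behind the paper's O(n (log n)^(d-2)) bound.

def dominates(p, q):
    # p dominates q if p is at least as good in every attribute and
    # strictly better in at least one (maximization assumed here).
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_divide_conquer(points):
    # Conceptual divide & conquer Pareto filter: recurse on halves,
    # then discard points in either half dominated by the other half.
    if len(points) <= 1:
        return list(points)
    mid = len(points) // 2
    left = pareto_divide_conquer(points[:mid])
    right = pareto_divide_conquer(points[mid:])
    # Naive "marriage" step: pairwise cross-comparison, kept simple for clarity.
    merged = [p for p in left if not any(dominates(q, p) for q in right)]
    merged += [q for q in right if not any(dominates(p, q) for p in left)]
    return merged

# Example with d = 3: (3, 2, 5) and (2, 4, 4) are Pareto, (1, 2, 3) is dominated.
print(pareto_divide_conquer([(3, 2, 5), (1, 2, 3), (2, 4, 4)]))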

    Algorithms to Identify Pareto Points in Multi-Dimensional Data Sets

    No full text
    The focus in this research is on developing a fast, efficient hybrid algorithm to identify the Pareto frontier in multi-dimensional data sets. The hybrid algorithm is a blend of two base algorithms: the Simple Cull (SC) algorithm, which has low overhead but high overall computational complexity, and the Divide & Conquer (DC) algorithm, which has lower computational complexity but high overhead. The hybrid algorithm employs aspects of each of the two base algorithms, adapting in response to the properties of the data. Each base algorithm performs better on different classes of data: the SC algorithm performs best for data sets with few nondominated points, high dimensionality, or fewer total points, while the DC algorithm performs better otherwise. The general approach of the hybrid algorithm is to execute the following steps in order:
    1. Execute one pass of the SC algorithm through the data, if merited.
    2. Execute the DC algorithm, which recursively splits the data into smaller problem sizes.
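    A minimal Python sketch of the Simple Cull idea referenced in the steps above (an interpretation, not the thesis's own pseudocode, and again assuming maximization): each candidate is compared against a running set of nondominated survivors. This keeps the overhead low but leaves the worst case quadratic in the number of points, which matches the trade-off against the DC algorithm that the abstract describes.

def dominates(p, q):
    # Maximization assumed: p dominates q if it is no worse in every
    # attribute and strictly better in at least one.
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def simple_cull(points):
    # One pass over the data, maintaining the current nondominated set.
    survivors = []
    for p in points:
        if any(dominates(q, p) for q in survivors):
            continue                                  # p is dominated; discard it
        survivors = [q for q in survivors if not dominates(p, q)]
        survivors.append(p)                           # p is nondominated so far
    return survivors

# Example: the dominated point (1, 1) is culled, the other two remain.
print(simple_cull([(1, 1), (3, 2), (2, 4)]))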