    Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing μ points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
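As a concrete illustration of the quantity such indicator-based algorithms maximize, the bi-objective hypervolume of a set of mutually non-dominated points (minimization, with respect to a reference point) can be computed by sorting and summing rectangles. This is a generic sketch, not code from the paper; all names are illustrative.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by mutually non-dominated bi-objective
    points (to be minimized) with respect to reference point ref."""
    pts = sorted(points)            # ascending in f1, hence descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # new slab of dominated area
        prev_f2 = f2
    return hv

# three points on the linear front f2 = 1 - f1, reference point (1, 1)
print(hypervolume_2d([(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)], (1.0, 1.0)))  # → 0.25
```

Note how the contribution of the extreme points shrinks to zero as the reference point approaches (1, 1), which is why the choice of reference point matters for which μ points a hypervolume maximizer retains.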

    Simultaneous Search

    We introduce and solve a new class of "downward-recursive" static portfolio choice problems. An individual simultaneously chooses among ranked stochastic options, and each choice is costly. In the motivational application, just one may be exercised from those that succeed. This often emerges in practice, such as when a student applies to many colleges. We show that a greedy algorithm finds the optimal set. The optimal choices are "less aggressive" than the sequentially optimal ones, but "more aggressive" than the best singletons. The optimal set in general contains gaps. We provide a comparative static on the chosen set.

    Keywords: college application, submodular optimization, greedy algorithm, directed search
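A toy sketch of the flavor of this problem, under simplifying assumptions not taken from the paper: each option is a (success probability, prize) pair, each application costs the same amount, and only the best successful option is exercised. A greedy routine adds whichever option has the largest marginal gain until no addition helps.

```python
def expected_payoff(chosen, cost):
    """Expected prize of the best success among `chosen`, minus costs.
    Each option is a (success_prob, prize) pair; only the best
    successful option is exercised (e.g. the best college admit)."""
    total, p_none_better = 0.0, 1.0
    for p, v in sorted(chosen, key=lambda o: -o[1]):   # best prize first
        total += p_none_better * p * v
        p_none_better *= 1.0 - p
    return total - cost * len(chosen)

def greedy_portfolio(options, cost):
    """Repeatedly add the option with the largest marginal gain;
    stop when no addition improves the expected payoff."""
    chosen = []
    while True:
        remaining = [o for o in options if o not in chosen]
        if not remaining:
            return chosen
        best = max(remaining,
                   key=lambda o: expected_payoff(chosen + [o], cost))
        if expected_payoff(chosen + [best], cost) <= expected_payoff(chosen, cost):
            return chosen
        chosen.append(best)

# colleges as (admission probability, payoff) pairs, application cost 0.05
apps = [(0.2, 1.0), (0.5, 0.6), (0.9, 0.3)]
print(greedy_portfolio(apps, 0.05))
```

In this toy setting the first option chosen is the best singleton, and later additions spread both up and down the ranking; the paper's model and its proof that greedy is exactly optimal are of course more specific than this sketch.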

    Choice of Law in Federal Courts: From Erie and Klaxon to CAFA and Shady Grove

    The article offers a new perspective on choice of law in federal courts. I have argued in a series of articles that ordinary choice of law problems are best understood through application of a particular conceptual framework, which I call the two-step model. Rather than thinking of choice of law as some sort of meta-procedure, this model takes it to address two substantive questions: what is the scope of the competing states’ laws, and which should be given priority if they conflict? My previous articles have explored the utility of this framework for tackling some perennial problems in choice of law. This one moves to a different context: choice of law in federal courts under the Erie doctrine. It argues that Erie is best understood as a straightforward application of this two-step model and that the model consequently offers a useful guide for Erie analysis. It shows how thinking about the Erie question in this way offers novel and satisfying solutions to a number of puzzles that have troubled courts and commentators in the wake of Erie. These puzzles include the effect that federal courts must give to state choice of law rules (the Klaxon issue), how Klaxon should interact with the Class Action Fairness Act of 2005, and the Court’s most recent venture into the Erie arena, Shady Grove v. Allstate. These issues have received substantial attention in the scholarly literature, but never from the two-step perspective.

    uDecide: A Protégé plugin for multi-attribute decision making

    This paper introduces the Protégé plugin uDecide. With the help of uDecide it is possible to solve multi-attribute decision making problems encoded in a straightforward extension of standard Description Logics. The formalism allows one to specify background knowledge in terms of an ontology, while each attribute is represented as a weighted class expression. On top of such an approach one can compute the best choice (or the best k choices) while taking background knowledge into account in the appropriate way. We show how to implement the approach on top of existing semantic web technologies and demonstrate its benefits with the help of an interesting use case that illustrates how to convert an existing web resource into an expert system with the help of uDecide.
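The core selection step can be pictured with a deliberately simplified stand-in: each weighted class expression becomes a (weight, membership test) pair, and a candidate's score is the summed weight of the attribute classes it belongs to. This is a hypothetical sketch of the idea only; uDecide itself works over OWL ontologies with a DL reasoner, not Python predicates.

```python
def best_choices(candidates, weighted_attributes, k=1):
    """Return the k candidates maximizing the summed weight of the
    attribute classes they satisfy (a stand-in for weighted class
    expressions evaluated by a reasoner)."""
    def score(individual):
        return sum(w for w, is_member in weighted_attributes
                   if is_member(individual))
    return sorted(candidates, key=score, reverse=True)[:k]

phones = [
    {"name": "A", "ram_gb": 8, "price": 900},
    {"name": "B", "ram_gb": 4, "price": 300},
    {"name": "C", "ram_gb": 8, "price": 500},
]
attrs = [
    (2.0, lambda p: p["ram_gb"] >= 8),    # stand-in for a class "HighMemoryPhone"
    (1.5, lambda p: p["price"] <= 600),   # stand-in for a class "AffordablePhone"
]
print(best_choices(phones, attrs, k=2))
```

The point of the ontology-based formulation is that membership in each attribute class is derived by a reasoner from background knowledge, rather than tested directly as in this sketch.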

    Closed form optimized transmission conditions for complex diffusion with many subdomains

    Optimized transmission conditions in domain decomposition methods have been the focus of intensive research efforts over the past decade. Traditionally, transmission conditions are optimized for two-subdomain model configurations and then used in practice for many subdomains. Here we optimize transmission conditions for the first time directly for many subdomains, for a class of complex diffusion problems. Our asymptotic analysis leads to closed-form optimized transmission conditions for many subdomains, and shows that the asymptotically best choice in the mesh size differs from the two-subdomain best choice only in the constants, for which we derive the dependence on the number of subdomains explicitly, including the limiting case of infinitely many subdomains, leading to new insight into scalability. Our results include both Robin and Ventcell transmission conditions, and we also optimize for the first time a two-sided Ventcell condition. We illustrate our results with numerical experiments, both for situations covered by our analysis and for situations that go beyond it.
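For orientation, a hedged sketch of the objects involved (notation illustrative, not taken from the paper): for a complex diffusion problem $(\eta - \Delta)u = f$ with $\eta \in \mathbb{C}$, a Robin transmission condition coupling neighboring subdomains $\Omega_i$ and $\Omega_j$ in a Schwarz iteration reads
\[
\left(\partial_{n_i} + p\right) u_i^{k+1} = \left(\partial_{n_i} + p\right) u_j^{k} \quad \text{on the interface } \Gamma_{ij},
\]
while a Ventcell condition replaces the operator $\partial_{n_i} + p$ by $\partial_{n_i} + p + q\,\partial_{\tau\tau}$ with a second-order tangential term. The free parameters $p$ (and $q$) are chosen to minimize the convergence factor of the iteration; the abstract's result is that the asymptotically optimal parameters as the mesh size $h \to 0$ keep the same powers of $h$ as in the two-subdomain analysis, with only the constants depending on the number of subdomains.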