19 research outputs found

    Matroids, Matchings, and Fairness

    No full text
    The need for fairness in machine learning algorithms is increasingly critical. A recent focus has been on developing fair versions of classical algorithms, such as those for bandit learning, regression, and clustering. We extend this line of work to algorithms for optimization subject to one or more matroid constraints. We map out this problem space, showing optimal solutions, approximation algorithms, or hardness results depending on the specific problem variant. Our algorithms are efficient, and empirical experiments demonstrate that fairness is achievable without a large compromise to the overall objective.
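The abstract does not specify the algorithms used; as a minimal sketch, one common setting is greedy maximization subject to a partition matroid, where per-group capacities enforce a simple fairness notion. The function name, the fairness notion, and the greedy approach are assumptions for illustration, not the paper's method.

```python
def fair_greedy(items, weight, group, capacity):
    """Hypothetical sketch: pick items in decreasing weight, keeping at
    most capacity[g] items from each group g (a partition matroid).
    Per-group capacities are one simple way to encode fairness."""
    chosen = []
    used = {g: 0 for g in capacity}  # items taken so far per group
    for item in sorted(items, key=weight, reverse=True):
        g = group(item)
        if used[g] < capacity[g]:  # independence check for the partition matroid
            chosen.append(item)
            used[g] += 1
    return chosen
```

For example, with capacities of one item per group, the greedy pass takes the heaviest feasible item from each group and skips the rest.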

    Recent Advances in Multiobjective Optimization

    No full text
    Multiobjective (or multicriteria) optimization is a research area with a rich history, under heavy investigation within Operations Research and Economics over the last 60 years [1,2]. Its object of study is solutions to combinatorial optimization problems that are evaluated under several objective functions, typically defined on multidimensional attribute (cost) vectors. In multiobjective optimization, we are interested not in finding a single optimal solution, but in computing the trade-off among the different objective functions, called the Pareto set (or curve) P: the set of all feasible solutions whose vector of objective values is not dominated by any other solution. Multiobjective optimization problems are usually NP-hard, because the Pareto set is typically exponential in size (even in the case of two objectives). On the other hand, even a decision maker armed with the entire Pareto set is still left with the problem of choosing the “best” solution for the application at hand. Consequently, three natural approaches to deal with multiobjective optimization problems are to
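The dominance relation underlying the Pareto set can be sketched directly. Assuming minimization objectives, a point x dominates y if x is no worse in every objective and strictly better in at least one; the Pareto set is everything no other point dominates. This brute-force filter is illustrative only (it is quadratic in the number of points).

```python
def dominates(x, y):
    """True if x dominates y under minimization: x is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def pareto_set(points):
    """Return the non-dominated points (the Pareto set)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among the cost vectors (1,4), (2,2), (3,3), and (4,1), only (3,3) is dominated (by (2,2)); the other three form the Pareto set.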

    User preferences for approximation-guided multi-objective evolution

    No full text
    LNCS, volume 8886. Incorporating user preferences into evolutionary multi-objective algorithms has been an important topic in recent research in evolutionary multi-objective optimization. We present a very simple yet very effective modification to the Approximation-Guided Evolution (AGE) algorithm to incorporate user preferences. Over a wide range of test functions, we observed that the resulting algorithm, called iAGE, is just as good at finding evenly distributed solutions as similarly modified NSGA-II and SPEA2 variants. However, in particular for “difficult” two-objective problems and for all three-objective problems, we see more evenly distributed solutions in the user-preferred region when using iAGE.
    Anh Quang Nguyen, Markus Wagner, Frank Neumann

    Goal-Driven Collaborative Filtering – A Directional Error Based Approach

    No full text
    Abstract. Collaborative filtering is one of the most effective techniques for making personalized content recommendations. In the literature, a common experimental setup in the modeling phase is to minimize, either explicitly or implicitly, the (expected) error between the predicted ratings and the true user ratings, while in the evaluation phase the resulting model is assessed by that same error. In this paper, we argue that an error function fixed across rating scales is limited: different applications have different recommendation goals, and thus different error functions. For example, in some cases we might be more concerned about highly predicted items than about items with low ratings (precision minded), while in other cases we want to make sure not to miss any highly rated items (recall minded). Additionally, some applications might require producing a top-N recommendation list, where rank-based performance measures become valid. To address this issue, we propose a flexible optimization framework that can adapt to individual recommendation goals. We introduce a Directional Error Function to capture the cost (risk) of each individual prediction, and it can be learned from the specified performance measures at hand. Our preliminary experiments on a real data set demonstrate significant performance gains.
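The paper's exact error function is not given here; as a hedged sketch, a "directional" error can be modeled by weighting over- and under-prediction asymmetrically. The function name and the fixed quadratic weighting below are assumptions for illustration (in the paper, the weighting is learned from the chosen performance measure).

```python
def directional_error(pred, true, over_w=1.0, under_w=2.0):
    """Recall-minded illustration: under-predicting a highly rated item
    (risking that it is missed) costs more than over-predicting it."""
    diff = pred - true
    # Quadratic loss, weighted by the direction of the error.
    return over_w * diff ** 2 if diff >= 0 else under_w * diff ** 2
```

With these weights, predicting 3 for a true rating of 5 costs twice as much as predicting 5 for a true rating of 3, even though both errors have the same magnitude.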