2 research outputs found

    Scalarized multi-objective reinforcement learning: Novel design techniques (abstract)

    No full text
    In multi-objective problems, it is key to find compromise solutions that balance the different objectives. The linear scalarization function is often used to translate the multi-objective nature of a problem into a standard, single-objective problem. However, such a linear combination can only find solutions in convex regions of the Pareto front, making the method unsuitable when the shape of the front is not known beforehand. We propose a non-linear scalarization function, the Chebyshev scalarization function, for multi-objective reinforcement learning. We show that the Chebyshev scalarization method overcomes the flaws of the linear scalarization function and is able to discover all Pareto optimal solutions in non-convex environments.
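
    As an illustration only (the abstract gives no implementation details), the sketch below contrasts linear and Chebyshev scalarization of a vector of per-objective Q-values; the Q-table layout, the weight vector, and the utopian reference point z* are assumptions made for this example.

import numpy as np

def linear_scalarize(q_vec, weights):
    # Weighted sum: only recovers points on convex parts of the Pareto front.
    return float(np.dot(weights, q_vec))

def chebyshev_scalarize(q_vec, weights, utopian):
    # Weighted Chebyshev (L-infinity) distance to a utopian reference point z*.
    # Smaller is better, so greedy action selection minimizes this value.
    return float(np.max(weights * np.abs(q_vec - utopian)))

def greedy_action(q_table, state, weights, utopian):
    # q_table has shape (n_states, n_actions, n_objectives); pick the action
    # whose Q-vector lies closest to z* in the weighted Chebyshev sense.
    scores = [chebyshev_scalarize(q_table[state, a], weights, utopian)
              for a in range(q_table.shape[1])]
    return int(np.argmin(scores))

# Toy usage: one state, three actions, two objectives.
q_table = np.array([[[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]])
weights = np.array([0.5, 0.5])
utopian = np.array([1.0, 1.0])                       # assumed reference point z*
print(greedy_action(q_table, 0, weights, utopian))   # -> 1, the balanced trade-off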

    Solving Satisfiability in Fuzzy Logics by Mixing CMA-ES (abstract)

    No full text
    Satisfiability in propositional logic is well researched and many approaches to checking and solving it exist. In infinite-valued or fuzzy logics, however, methods for solving satisfiability have only recently begun to be developed. In this paper, we analyse the function landscape of different problem classes, focusing our analysis on plateaus. Based on this study, we develop Mixing CMA-ES (M-CMA-ES), an extension of CMA-ES that is well suited to solving problems with many large plateaus. We empirically show the relation between certain function landscape properties and M-CMA-ES performance.
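
    To make the setting concrete, the sketch below casts a small fuzzy satisfiability instance as the kind of continuous objective an evolution strategy such as CMA-ES would minimize, assuming Łukasiewicz semantics; the clipping in the disjunction is one source of the plateaus the abstract refers to. The clause encoding and function names are illustrative, and the mixing mechanism of M-CMA-ES itself is not shown here.

import numpy as np

# Łukasiewicz connectives on truth values in [0, 1].
def luka_or(a, b):
    return np.minimum(1.0, a + b)      # strong disjunction, clipped at 1

def luka_neg(a):
    return 1.0 - a

def dissatisfaction(x, clauses):
    # Objective for a continuous optimizer: 0 means every clause is fully
    # satisfied (truth value 1). The clipping in luka_or creates flat regions
    # (plateaus) once a clause saturates, which is what makes the landscape
    # hard for a plain evolution strategy.
    total = 0.0
    for clause in clauses:             # clause: list of (var_index, negated)
        value = 0.0
        for idx, negated in clause:
            lit = luka_neg(x[idx]) if negated else x[idx]
            value = luka_or(value, lit)
        total += 1.0 - value           # shortfall from full satisfaction
    return total

# Example: (x0 OR NOT x1) AND (x1 OR x2), assignment searched over [0, 1]^3.
clauses = [[(0, False), (1, True)], [(1, False), (2, False)]]
print(dissatisfaction(np.array([1.0, 0.0, 1.0]), clauses))  # 0.0: fully satisfied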