2,782 research outputs found

    The Kalai-Smorodinsky solution for many-objective Bayesian optimization

    An ongoing aim of research in multiobjective Bayesian optimization is to extend its applicability to a large number of objectives. While coping with a limited budget of evaluations, recovering the set of optimal compromise solutions generally requires numerous observations, and this set becomes less interpretable since it tends to grow larger with the number of objectives. We thus propose to focus on a specific solution originating from game theory, the Kalai-Smorodinsky solution, which possesses attractive properties. In particular, it ensures equal marginal gains over all objectives. We further make it insensitive to monotonic transformations of the objectives by considering the objectives in the copula space. A novel tailored algorithm is proposed to search for the solution, in the form of a Bayesian optimization algorithm: sequential sampling decisions are made based on acquisition functions that derive from an instrumental Gaussian process prior. Our approach is tested on four problems with respectively four, six, eight, and nine objectives. The method is available in the R package GPGame on CRAN at https://cran.r-project.org/package=GPGame.
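For a finite set of Pareto-optimal points, the Kalai-Smorodinsky solution the abstract describes can be sketched in a few lines: it is the point that equalizes (and maximizes) the normalized gains over all objectives. This is a minimal illustration assuming minimization and taking the nadir point as the disagreement point; the function name is illustrative and this is not the GPGame implementation.

```python
import numpy as np

def kalai_smorodinsky_point(front):
    """Pick the point of a finite Pareto front closest to the
    Kalai-Smorodinsky solution: the point on the segment from the
    nadir (disagreement) point to the ideal (utopia) point, which
    equalizes normalized gains across objectives.
    `front` is an (n_points, n_objectives) array of minimization
    objectives, assumed non-constant in every column.
    """
    ideal = front.min(axis=0)   # best value per objective
    nadir = front.max(axis=0)   # worst value per objective (disagreement point)
    # normalized gain of each point on each objective, in [0, 1]
    gains = (nadir - front) / (nadir - ideal)
    # the KS point maximizes the smallest per-objective gain,
    # i.e. it advances all objectives as equally as possible
    return front[np.argmax(gains.min(axis=1))]
```

On the front {(0, 10), (5, 5), (10, 0)}, for instance, the balanced compromise (5, 5) is selected, since it is the only point with a positive gain on both objectives.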

    A game theoretic perspective on Bayesian multi-objective optimization

    This chapter addresses the question of how to efficiently solve many-objective optimization problems in a computationally demanding black-box simulation context. We motivate the question by applications in machine learning and engineering, and discuss the specific, harsh challenges in using classical Pareto approaches when the number of objectives is four or more. Then, we review solutions combining approaches from Bayesian optimization, e.g., with Gaussian processes, and concepts from game theory like Nash equilibria and Kalai-Smorodinsky solutions, and detail extensions like Nash-Kalai-Smorodinsky solutions. We finally introduce the corresponding algorithms and provide some illustrative results.

    Incorporating Human Preferences in Decision Making for Dynamic Multi-Objective Optimization in Model Predictive Control

    We present a new two-step approach for automated a posteriori decision making in multi-objective optimization problems, i.e., selecting a solution from the Pareto front. In the first step, a knee region is determined based on the normalized Euclidean distance from a hyperplane defined by the furthest Pareto solution and the negative unit vector. The size of the knee region depends on the Pareto front’s shape and a design parameter. In the second step, preferences for all objectives formulated by the decision maker, e.g., 50–20–30 for a 3D problem, are translated into a hyperplane which is then used to choose a final solution from the knee region. This way, the decision maker’s preference can be incorporated, while its influence depends on the Pareto front’s shape and a design parameter, at the same time favoring knee points if they exist. The proposed approach is applied in simulation for the multi-objective model predictive control (MPC) of the two-dimensional rocket car example and the energy management system of a building.
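The two-step idea can be sketched as follows. This is a deliberately simplified version: it uses the hyperplane through the normalized extreme points rather than the paper's exact construction, and the function name, the `knee_width` parameter, and the scoring rule are all illustrative assumptions.

```python
import numpy as np

def select_solution(front, prefs, knee_width=0.1):
    """Two-step a posteriori decision making on a Pareto front
    (minimization). Step 1: find a knee region as the points whose
    distance below a reference hyperplane is within `knee_width` of
    the maximum. Step 2: pick, from that region, the point that best
    matches the decision maker's preference weights, e.g. [0.5, 0.2,
    0.3] for a 3D problem. A sketch of the general idea, not the
    paper's exact selection rule.
    """
    # normalize each objective to [0, 1]
    lo, hi = front.min(axis=0), front.max(axis=0)
    f = (front - lo) / (hi - lo)
    # hyperplane sum(f) = 1 through the normalized extreme points;
    # knee points lie furthest below it, toward the ideal point
    m = front.shape[1]
    dist = (1.0 - f.sum(axis=1)) / np.sqrt(m)
    knee = dist >= dist.max() - knee_width
    # within the knee region, minimize the preference-weighted sum
    score = (f * np.asarray(prefs)).sum(axis=1)
    score[~knee] = np.inf
    return front[np.argmin(score)]
```

A larger `knee_width` widens the knee region and thereby gives the preference weights more influence over the final choice, mirroring the role of the design parameter in the abstract.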

    High-Throughput System for the Early Quantification of Major Architectural Traits in Olive Breeding Trials Using UAV Images and OBIA Techniques

    The need for olive farm modernization has encouraged research into more efficient crop management strategies through cross-breeding programs that release new olive cultivars more suitable for mechanization and use in intensive orchards, with high-quality production and resistance to biotic and abiotic stresses. The advancement of breeding programs is hampered by the lack of efficient phenotyping methods to quickly and accurately acquire crop traits such as morphological attributes (tree vigor and vegetative growth habits), which are key to identifying desirable genotypes as early as possible. In this context, a UAV-based high-throughput system for olive breeding program applications was developed to extract tree traits in large-scale phenotyping studies under field conditions. The system consisted of UAV-flight configurations, in terms of flight altitude and image overlaps, and a novel, automatic, and accurate object-based image analysis (OBIA) algorithm based on point clouds, which was evaluated in two experimental trials in the framework of a table olive breeding program, with the aim of determining the earliest date at which tree architectural traits can be suitably quantified. Two training systems (intensive and hedgerow) were evaluated at two very early stages of tree growth: 15 and 27 months after planting. Digital Terrain Models (DTMs) were automatically and accurately generated by the algorithm, and every olive tree was identified, independently of the training system and tree age. The architectural traits, especially tree height and crown area, were estimated with high accuracy in the second flight campaign, i.e., 27 months after planting. Differences in the quality of 3D crown reconstruction were found for the growth patterns derived from each training system. These key phenotyping traits could be used in several olive breeding programs, as well as to address some agronomical goals.
    In addition, this system is cost- and time-optimized, so that the requested architectural traits can be provided on the same day as the UAV flights. This high-throughput system may solve the actual bottleneck of plant phenotyping, "linking genotype and phenotype," considered a major challenge for crop research in the 21st century, and bring forward the crucial decision-making time for breeders.
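The two traits highlighted above, tree height and crown area, are typically derived in such pipelines by differencing surface and terrain models. The sketch below shows that generic step under stated assumptions (gridded rasters, a simple height threshold to segment the crown); the function name and threshold are illustrative, and this is not the paper's OBIA algorithm.

```python
import numpy as np

def tree_traits(dsm, dtm, pixel_size, height_thresh=0.2):
    """Derive two architectural traits from UAV photogrammetry rasters:
    subtract the Digital Terrain Model (DTM) from the Digital Surface
    Model (DSM) to get a canopy height model, then take tree height as
    its maximum and crown area as the count of pixels above a height
    threshold times the pixel area. `dsm` and `dtm` are 2D arrays in
    meters; `pixel_size` is the ground sampling distance in meters.
    """
    chm = dsm - dtm                      # canopy height model (m)
    crown = chm > height_thresh          # pixels belonging to the crown
    height = float(chm[crown].max()) if crown.any() else 0.0
    area = float(crown.sum()) * pixel_size ** 2   # crown area (m^2)
    return height, area
```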

    Several approaches for the traveling salesman problem

    We characterize both approaches, mldp and k-mldp, with several methodologies; both a linear and a non-linear mathematical formulation are proposed. Additionally, an exact methodology to solve both linear formulations is designed and implemented, and with it we obtain exact results. Due to the large computation time these formulations take to be solved with the proposed exact methodology, we analyze the complexity of each of these approaches and show that both problems are NP-hard. As both problems are NP-hard, we propose three metaheuristic methods to obtain solutions in shorter computation time. Our solution methods are population-based metaheuristics which exploit the structure of both problems and give good-quality solutions by introducing novel local search procedures that explore the search space more efficiently and obtain good-quality solutions in shorter computation time. Our main contribution is the study and characterization of a bi-objective problem involving the minimization of two objectives: an economic one, which aims to minimize the total travel distance, and a service-quality objective, which aims to minimize the waiting time of the clients to be visited. With this combination of objectives, we aim to characterize the inclusion of the client in the decision-making process, introducing service-quality decisions alongside a classic routing objective. This doctoral dissertation studies and characterizes a combination of objectives with several logistic applications. This combination pursues not only a benefit for the company but also a benefit for the clients waiting to obtain a service or a product. In classic routing theory, an economic approach is widely studied: the minimization of the traveled distance and the cost spent to perform the visits is an economic objective.
    This dissertation aims at the inclusion of the client in the decision-making process, to bring out a certain level of satisfaction in the client set when performing an action. We start from a set of clients demanding a service from a certain company. Several assumptions are made: when visiting a client, an agent must leave from a known depot and come back to it at the end of the tour assigned to it. All travel times among the clients and the depot are known, as well as all service times at each client. That is to say, the agent knows how long it will take to reach a client and to perform the requested service at the client's location. The company is interested in improving two characteristics: an economic objective as well as a service-quality objective, by minimizing the total travel distance of the agent while also minimizing the total waiting time of the clients. We study two main approaches: the first is to fulfill the visits assuming there is a single uncapacitated vehicle, that is, a vehicle with infinite capacity to attend all clients. The second is to fulfill the visits with a fleet of k uncapacitated vehicles, all of them restricted to the strict constraint of being active and having at least one client to visit. We denominate the single-vehicle approach the minimum latency-distance problem (mldp), and the k-sized-fleet approach the k-minimum latency-distance problem (k-mldp). As previously stated, the company has two options: to fulfill the visits with a single vehicle or with a fixed-size fleet of k agents.
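Under the abstract's assumptions (known travel times, known service times, an agent starting and ending at the depot), evaluating one single-vehicle tour against the two objectives can be sketched as below. The function name is illustrative, and this only evaluates a given tour; it is not one of the dissertation's exact or metaheuristic solution methods.

```python
def evaluate_tour(tour, travel, service):
    """Evaluate one single-vehicle tour for the minimum latency-distance
    problem (mldp). The agent leaves depot 0, visits the clients in
    `tour` order, and returns to the depot. `travel` is a matrix of
    travel times/distances, `service` the per-client service times.
    Returns the two objectives: total travel distance (economic) and
    total client waiting time, or latency (service quality), where a
    client waits for all travel and service performed before its visit.
    """
    distance, clock, latency = 0.0, 0.0, 0.0
    prev = 0                               # start at the depot
    for c in tour:
        distance += travel[prev][c]
        clock += travel[prev][c]           # client c waits until arrival
        latency += clock
        clock += service[c]                # then the service is performed
        prev = c
    distance += travel[prev][0]            # return to the depot
    return distance, latency
```

Note the tension between the two objectives: a client visited late adds its full accumulated waiting time to the latency, so latency-minimizing tours front-load nearby clients even when that lengthens the total distance.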

    Matrix and Tensor-based ESPRIT Algorithm for Joint Angle and Delay Estimation in 2D Active Broadband Massive MIMO Systems and Analysis of Direction of Arrival Estimation Algorithms for Basal Ice Sheet Tomography

    In this thesis, we apply and analyze three direction-of-arrival (DoA) algorithms to tackle two distinct problems: one belongs to wireless communication, the other to radar signal processing. Though the essence of both problems is DoA estimation, their formulations, underlying assumptions, application scenarios, etc. are totally different. Hence, we treat them separately, with the ESPRIT algorithm the focus of Part I and MUSIC and MLE detailed in Part II. For the wireless communication scenario, mobile data traffic is expected to grow exponentially in the future. In order to meet this challenge as well as the form-factor limitation on the base station, 2D "massive MIMO" has been proposed as one of the enabling technologies to significantly increase the spectral efficiency of a wireless system. In "massive MIMO" systems, a base station relies on the uplink sounding signals from mobile stations to extract the spatial information needed to perform MIMO beamforming. Accordingly, multi-dimensional parameter estimation of a ray-based multi-path wireless channel becomes crucial for such systems to realize the predicted capacity gains. In Part I, we study joint angle and delay estimation for 2D "massive MIMO" systems in mobile wireless communications. To be specific, we first introduce a low-complexity time delay and 2D DoA estimation algorithm based on a unitary transformation. Some closed-form results and capacity analysis are included. Furthermore, matrix and tensor-based 3D ESPRIT-like algorithms are applied to jointly estimate angles and delay. Significant performance improvements can be observed in our communication scheme. Finally, we found that azimuth estimation is more vulnerable than elevation estimation. Results suggest that the dimension of the antenna array at the base station plays an important role in determining the estimation performance.
    These insights will be useful for designing practical "massive MIMO" systems in future mobile wireless communications. For the problem of radar remote sensing of ice sheet topography, one of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enables accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of DoA algorithms such as MUSIC and MLE techniques. These methods have enabled fine resolution in the cross-track dimension using synthetic aperture radar (SAR) images obtained from single-pass multichannel data. In Part II, we analyze and compare the results obtained from the spectral MUSIC algorithm and an alternating projection (AP) based MLE technique. While the MUSIC algorithm is computationally more attractive than MLE, the performance of the latter is known to be superior in most situations. The SAR-focused datasets provide a good case study for exploring the performance of these two techniques in the application of ice sheet bed elevation estimation. For the antenna array geometry and sample support used in our tomographic application, MUSIC initially performs better under a cross-over analysis, in which the estimated topography from crossing flightlines is compared for consistency. However, after several improvements are applied to MLE, i.e., replacing ideal steering-vector generation with measured steering vectors, automatically determining the number of scatter sources, smoothing the 3D tomography to obtain a more accurate height estimation, and introducing a quality metric for the estimated signals, MLE outperforms MUSIC. This confirms that MLE is indeed the optimal estimator for our particular ice bed tomographic application. We observe that spatial bottom smoothing, which aims to remove the artifacts produced by the MLE algorithm, is the most essential step in the post-processing procedure.
    The 3D tomography we obtained lays a good foundation for further analysis and modeling of ice sheets.
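The spectral MUSIC algorithm compared above can be sketched for a uniform linear array as below. This is the generic textbook version under assumed ideal steering vectors and half-wavelength spacing; the thesis instead uses measured steering vectors in a SAR tomographic setup, and the function name and parameters here are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
    """Spectral MUSIC for a uniform linear array. X is the
    (n_antennas, n_snapshots) complex data matrix and `spacing` the
    element spacing in wavelengths. Returns the pseudo-spectrum over
    the candidate angles; its peaks are the DoA estimates.
    """
    m, n = X.shape
    R = X @ X.conj().T / n                    # sample covariance matrix
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]             # noise-subspace eigenvectors
    theta = np.deg2rad(np.asarray(angles_deg))
    # ideal array steering vectors for every candidate angle
    A = np.exp(-2j * np.pi * spacing
               * np.outer(np.arange(m), np.sin(theta)))
    # peaks occur where the steering vector is (nearly) orthogonal
    # to the noise subspace
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
```

The MLE improvements listed in the abstract address exactly the idealizations visible here: the steering matrix `A` is replaced by measured steering vectors, and `n_sources` is determined automatically rather than supplied by hand.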