35 research outputs found

    Ordonnancement hiérarchique pour des missions multi-robots (Hierarchical scheduling for multi-robot missions)

    We are interested in real case studies involving tasks (here, observations of areas of a field) that can be performed by disjunctive resources (robots), as well as operations to prepare these resources (robot movements between observations), which must themselves be decomposed into low-level operations (movements of each robot between crossing points in the field, where each traversed link cannot be occupied by more than one robot at a time). The robots transmit the observation data in real time to the mission centre over a transmission frequency (a disjunctive resource). Each observation must be performed by several robots (redundancy), and precedence constraints may be imposed. The problem we consider is to allocate each observation to a robot, to schedule the observations assigned to each robot, and to plan the movement tasks so as to minimize the total duration of the mission. The goal is not to obtain an optimal solution, but to obtain good-quality solutions quickly. We define a framework for representing hierarchical scheduling problems in which tasks are characterized by their resource consumption and for which Constraint Programming (CP) models can be generated automatically. The framework is based on hierarchical task networks (HTNs), in which high-level tasks, defined by their preconditions and their effects on the state of the system, must be decomposed into primitive tasks by decomposition methods. It can easily be generalized to cumulative resources with limited capacity or to non-renewable resources.
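
    As a rough illustration of the kind of model such a framework generates, here is a minimal allocation-and-scheduling sketch in Google OR-Tools CP-SAT; the durations, the single precedence, and the two-robot fleet are made-up assumptions, and redundancy, travel operations, and the shared transmission frequency are omitted for brevity:

```python
# Minimal sketch: allocate observations to robots and schedule them to
# minimize makespan, treating each robot as a disjunctive resource.
# All durations below are illustrative placeholders.
from ortools.sat.python import cp_model

durations = [4, 3, 5, 2]            # observation durations (hypothetical)
robots = range(2)
horizon = sum(durations) * 2

model = cp_model.CpModel()
starts, ends = [], []
per_robot_intervals = {r: [] for r in robots}

for t, d in enumerate(durations):
    start = model.NewIntVar(0, horizon, f"start_{t}")
    end = model.NewIntVar(0, horizon, f"end_{t}")
    model.Add(end == start + d)
    lits = []
    for r in robots:
        lit = model.NewBoolVar(f"obs{t}_on_robot{r}")
        interval = model.NewOptionalIntervalVar(start, d, end, lit, f"iv_{t}_{r}")
        per_robot_intervals[r].append(interval)
        lits.append(lit)
    model.Add(sum(lits) == 1)       # each observation assigned to one robot
    starts.append(start)
    ends.append(end)

for r in robots:                    # a robot performs one observation at a time
    model.AddNoOverlap(per_robot_intervals[r])

model.Add(starts[1] >= ends[0])     # example precedence: obs 1 after obs 0

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
status = solver.Solve(model)
print(status == cp_model.OPTIMAL, solver.Value(makespan))
```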

    Design optimization for bellow soft pneumatic actuators in shape-matching

    Design optimization of soft actuators is essential for task-oriented applications. Models derived from analytical solutions, the Finite Element Method (FEM), or empirically characterized datasets are widely used to estimate the response of the actuators during actuation, acting as the backbone for design optimization. Faced with the trade-off between speed and accuracy, substantial challenges arise when moving from simulation to optimization because of the compliance, high degrees of freedom, and high-dimensional design space of soft-bodied robots. FEM becomes increasingly computationally expensive as design complexity grows over optimization iterations, while data-driven modeling approaches (e.g., Artificial Neural Networks) consume significant resources prior to optimization. To address the challenge of highly nonlinear and non-convex design optimization in soft robots using black-box modeling, this paper compares the Bayesian optimization (BO) algorithm and the genetic algorithm (GA) combined with FEM and Artificial Neural Network (ANN) models. The shape-matching of a multi-legged robot (a starfish) is demonstrated as an example of a task-oriented design scenario that presents design optimization challenges related to design-space scalability. The experimental results show that bi-level BO outperforms BO with FEM, achieving 2.8 to 9.8 times smaller objective values within a given time budget for low-dimensional design problems; GA with the ANN model can achieve lower objective values 3 to 18 times faster on high-dimensional design problems than bi-level BO with FEM achieves on low-dimensional design problems.
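
    For context, the following is a minimal sketch of a surrogate-assisted Bayesian optimization loop of the kind compared in the paper, with a cheap analytic stand-in for the expensive FEM/ANN shape-matching objective; the objective, bounds, and evaluation budget are illustrative assumptions:

```python
# Minimal Bayesian-optimization loop with a Gaussian-process surrogate and
# an expected-improvement acquisition; the objective is a toy stand-in for an
# expensive FEM/ANN shape-matching error.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim, bounds = 4, (0.0, 1.0)                       # hypothetical design space

def shape_error(x):                               # placeholder objective
    return float(np.sum((x - 0.3) ** 2))

X = rng.uniform(*bounds, size=(5, dim))           # initial random designs
y = np.array([shape_error(x) for x in X])

for _ in range(30):                               # evaluation budget (assumed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = rng.uniform(*bounds, size=(2000, dim)) # random candidate screening
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, shape_error(x_next))

print("best objective:", y.min())
```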

    Boosting Combinatorial Problem Modeling with Machine Learning

    In the past few years, the area of Machine Learning (ML) has witnessed tremendous advancements, becoming a pervasive technology in a wide range of applications. One area that can significantly benefit from the use of ML is Combinatorial Optimization. The three pillars of constraint satisfaction and optimization problem solving, i.e., modeling, search, and optimization, can exploit ML techniques to boost their accuracy, efficiency, and effectiveness. In this survey we focus on the modeling component, whose effectiveness is crucial for solving the problem. Modeling has traditionally been shaped by optimization and domain experts interacting to produce realistic results. Machine Learning techniques can greatly ease this process and exploit the available data either to create models or to refine expert-designed ones. In this survey we cover approaches that have recently been proposed to enhance the modeling process by learning single constraints, objective functions, or the whole model. We highlight themes common to multiple approaches and draw connections with related fields of research.
    Comment: Originally submitted to IJCAI201
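
    As a toy illustration of the "learn part of the model from data" theme covered by the survey, the sketch below fits an objective function from historical solution data and plugs the learned coefficients into a small declarative model; the data, the knapsack-style constraints, and the solver choice (OR-Tools CP-SAT) are all assumptions made for the example, not taken from the survey:

```python
# Toy example: learn objective coefficients from historical (selection, value)
# data with linear regression, then optimize over the learned objective with
# a constraint solver. Data, weights, and capacity are fabricated.
import numpy as np
from sklearn.linear_model import LinearRegression
from ortools.sat.python import cp_model

rng = np.random.default_rng(1)
true_values = np.array([8.0, 3.0, 6.0, 11.0, 4.0])   # unknown to the modeler
weights = [5, 2, 4, 7, 3]
capacity = 10

selections = rng.integers(0, 2, size=(60, 5))         # past decisions
observed = selections @ true_values + rng.normal(0, 0.5, 60)
learned = LinearRegression().fit(selections, observed).coef_

model = cp_model.CpModel()
x = [model.NewBoolVar(f"x{i}") for i in range(5)]
model.Add(sum(w * xi for w, xi in zip(weights, x)) <= capacity)
# CP-SAT needs integer coefficients, so scale the learned values.
model.Maximize(sum(int(round(100 * c)) * xi for c, xi in zip(learned, x)))

solver = cp_model.CpSolver()
solver.Solve(model)
print("chosen items:", [i for i in range(5) if solver.Value(x[i])])
```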

    Optimal Systemic Risk Bailout: A PGO Approach Based on Neural Network

    The bailout strategy is crucial for cushioning the massive losses caused by systemic risk in the financial system. The optimal bailout problem has no closed-form formulation, which makes it difficult to solve. In this paper, we regard the optimal bailout (capital injection) problem as a black-box optimization problem, where the black box is characterized as a fixed-point system that follows the E-N framework for measuring the systemic risk of the financial system. We propose the "Prediction-Gradient-Optimization" (PGO) framework to solve it: "Prediction" means that the objective function, which has no closed form, is approximated and predicted by a neural network; the "Gradient" is calculated from this approximation; and the "Optimization" step is carried out with a gradient projection algorithm. Comprehensive numerical simulations demonstrate that the proposed approach is promising for systemic risk management.
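
    A minimal sketch of the prediction-then-projected-gradient idea follows, with a small (untrained, randomly initialized) PyTorch network standing in for the fitted surrogate of the systemic-loss map and a simple non-negativity-plus-budget feasible set; the network, budget, and problem size are illustrative assumptions, not the paper's E-N fixed-point model:

```python
# Sketch: take gradient steps on a neural-network surrogate of the systemic
# loss and project each iterate onto the feasible set {x >= 0, sum(x) <= B}.
import numpy as np
import torch

n_banks, budget = 10, 5.0                     # hypothetical sizes
surrogate = torch.nn.Sequential(              # stand-in for a trained surrogate
    torch.nn.Linear(n_banks, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)

def project(y, b):
    # Euclidean projection onto {x >= 0, sum(x) <= b}.
    x = np.clip(y, 0.0, None)
    if x.sum() <= b:
        return x
    u = np.sort(x)[::-1]                       # simplex projection (sum = b)
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - b))[0][-1]
    theta = (css[rho] - b) / (rho + 1.0)
    return np.clip(x - theta, 0.0, None)

x = np.full(n_banks, budget / n_banks)        # start from a uniform injection
for _ in range(200):
    xt = torch.tensor(x, dtype=torch.float32, requires_grad=True)
    loss = surrogate(xt).sum()                # predicted systemic loss
    grad = torch.autograd.grad(loss, xt)[0].numpy()
    x = project(x - 0.05 * grad, budget)      # gradient step + projection

print("capital injection plan:", np.round(x, 3))
```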

    Metamodelling the hot deformation behaviour of titanium alloys using a mean-field approach

    During the thermomechanical processing of titanium alloys in the β-domain, the β-phase undergoes restoration phenomena. This work describes them with a mean-field physical model that correlates the flow stress with the microstructural evolution. To reduce the computational time of process simulations, metamodels are developed for specific outputs of the mean-field physical model using an Artificial Neural Network (ANN) and Decision Tree Regression (DTR). The performance of the obtained metamodels is evaluated in terms of the coefficient of determination (R²), the root-mean-square error (RMSE), and the mean relative error (MRE). No significant difference was observed between the training and testing R² values, meaning that all the metamodels correctly generalise the overall behaviour of the outputs over a wide range of inputs. The evolution of the metamodel outputs is compared with the model predictions in two situations: 1) at a constant strain rate and temperature, and 2) during Finite Element (FE) simulations of the hot deformation of a hat-shaped sample, where temperature and effective strain rate vary at each element during deformation. The evolution of the outputs at constant and non-constant strain rates and temperatures demonstrates the robustness of the metamodels in predicting the heterogeneous deformation within a workpiece. The computational time required by the metamodels to calculate selected outputs can be more than 100 times less than that of the model itself at a constant strain rate using MATLAB®, and up to 19% less when coupled with FE simulations. The simulation results, combined with microstructural analysis, are used to visualise the different restoration mechanisms occurring in different regions of the hat-shaped sample as a function of the local thermomechanical history. The changes in strain rate and temperature during deformation influence the evolution of the wall dislocation density and the immobilisation rate of mobile dislocations at subgrain boundaries, leading to different kinetics of microstructure evolution.
    Authors: Franz Miller Branco Ferraz (Graz University of Technology, Austria); Lukasz Sztangret (AGH University of Science and Technology, Poland); Fernando Diego Carazo (Universidad Nacional de San Juan, Instituto de Mecanica Aplicada, and Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Ricardo Henrique Buzolin (Graz University of Technology, Austria); Peng Wang (Graz University of Technology, Austria); Danuta Szeliga (AGH University of Science and Technology, Poland); Pedro dos Santos Effertz (affiliation not specified); Piotr Macio (AGH University of Science and Technology, Poland); Alfred Krumphals (affiliation not specified); Maria Cecilia Poletti (Graz University of Technology, Austria)
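
    A minimal sketch of the metamodelling step described above, fitting an ANN and a decision-tree regressor to samples of a stand-in model output and scoring them with R², RMSE, and MRE; the stand-in function, input ranges, and sample sizes are assumptions for illustration only:

```python
# Sketch: train ANN and decision-tree metamodels on samples of a physical
# model output (a toy stand-in here) and report R2, RMSE, and MRE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Inputs: temperature and strain rate (ranges are illustrative only).
X = rng.uniform([900.0, 0.001], [1200.0, 10.0], size=(2000, 2))

def physical_model(x):
    # Toy stand-in for one output of the mean-field model (not the real model).
    T, rate = x[:, 0], x[:, 1]
    return 200.0 * np.exp(-T / 600.0) * rate ** 0.15

y = physical_model(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

metamodels = {
    "ANN": make_pipeline(StandardScaler(), MLPRegressor((32, 32), max_iter=2000)),
    "DTR": DecisionTreeRegressor(max_depth=8),
}
for name, reg in metamodels.items():
    reg.fit(X_tr, y_tr)
    pred = reg.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mre = np.mean(np.abs((pred - y_te) / y_te))
    print(f"{name}: R2={r2_score(y_te, pred):.3f} RMSE={rmse:.3f} MRE={mre:.3f}")
```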

    Fast Design Space Exploration of Nonlinear Systems: Part I

    System design tools are often available only as blackboxes with complex nonlinear relationships between inputs and outputs. Blackboxes typically run in the forward direction: for a given design as input they compute an output representing system behavior. Most cannot be run in reverse to produce an input from requirements on the output. Thus, finding a design satisfying a requirement is often a trial-and-error process without assurance of optimality. Finding designs that concurrently satisfy multiple requirements is harder, because designs satisfying individual requirements may conflict with each other. Compounding the hardness, blackbox evaluations can be expensive and sometimes fail to produce an output due to non-convergence of the underlying numerical algorithms. This paper presents CNMA (Constrained optimization with Neural networks, MILP solvers and Active Learning), a new optimization method for blackboxes. It is conservative in the number of blackbox evaluations. Any designs it finds are guaranteed to satisfy all requirements. It is resilient to the failure of blackboxes to compute outputs. It tries to sample only the part of the design space relevant to solving the design problem, leveraging the power of neural networks, MILPs, and a new learning-from-failure feedback loop. The paper also presents parallel CNMA, which improves the efficiency and quality of solutions over the sequential version and tries to steer it away from local optima. CNMA's performance is evaluated on seven nonlinear design problems: six with 8 (two problems), 10, 15, 36, and 60 real-valued dimensions, and one with 186 binary dimensions. It is shown that CNMA improves on stable, off-the-shelf implementations of Bayesian Optimization, Nelder-Mead, and Random Search by 1%-87% for a given fixed time and function evaluation budget. Note that these implementations did not always return solutions.
    Comment: 14 pages, 26 figures. arXiv admin note: text overlap with arXiv:2010.0984
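
    The sketch below shows the general surrogate-plus-active-learning pattern behind this kind of blackbox design loop; it substitutes a plain neural surrogate with random candidate screening for CNMA's MILP encoding of the network, and the blackbox, requirement threshold, and budgets are illustrative assumptions:

```python
# Sketch of a surrogate-guided active-learning loop for blackbox design:
# fit a surrogate to evaluated designs, pick the candidate the surrogate
# predicts will best satisfy the requirement, evaluate it, and repeat.
# (CNMA itself optimizes over a MILP encoding of a ReLU surrogate; here we
# screen random candidates instead, purely for illustration.)
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
dim, requirement = 6, 0.05            # hypothetical design space and target

def blackbox(x):                      # stand-in for an expensive simulator
    return float(np.sum(np.sin(3 * x) ** 2) / dim)

X = rng.uniform(-1, 1, size=(10, dim))
y = np.array([blackbox(x) for x in X])

for _ in range(40):                   # evaluation budget (assumed)
    surrogate = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000)
    surrogate.fit(X, y)
    cand = rng.uniform(-1, 1, size=(5000, dim))
    x_next = cand[np.argmin(surrogate.predict(cand))]
    y_next = blackbox(x_next)         # learn from every evaluation
    X, y = np.vstack([X, x_next]), np.append(y, y_next)
    if y_next <= requirement:
        print("requirement met:", y_next)
        break
else:
    print("best found:", y.min())
```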