A Novel Point-based Algorithm for Multi-agent Control Using the Common Information Approach
The Common Information (CI) approach provides a systematic way to transform a
multi-agent stochastic control problem into a single-agent partially
observable Markov decision process (POMDP) called the coordinator's POMDP.
However, such a
POMDP can be hard to solve due to its extraordinarily large action space. We
propose a new algorithm for multi-agent stochastic control problems, called
coordinator's heuristic search value iteration (CHSVI), that combines the CI
approach and point-based POMDP algorithms for large action spaces. We
demonstrate the algorithm by optimally solving several benchmark problems. Comment: 11 pages, 4 figures
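Point-based POMDP solvers like the HSVI family maintain a set of alpha-vectors and repeatedly back them up at sampled belief points. The sketch below shows a generic point-based backup on a toy POMDP (all model numbers are made up for illustration); it is not the CHSVI algorithm itself, which additionally exploits common-information structure to cope with the large action space.

```python
import numpy as np

# Toy 2-state / 2-action / 2-observation POMDP; all numbers are illustrative
# assumptions, not the coordinator's POMDP from the paper.
T = np.array([[[0.9, 0.1], [0.1, 0.9]],    # T[a, s, s']: transition probs
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.2, 0.8]],    # O[a, s', o]: observation probs
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, -1.0], [0.0, 0.0]])    # R[a, s]: rewards
gamma = 0.95

def point_based_backup(b, Gamma):
    """One Bellman backup at belief b against the alpha-vector set Gamma."""
    best_val, best_alpha = -np.inf, None
    for a in range(T.shape[0]):
        alpha = R[a].copy()
        for o in range(O.shape[2]):
            # g[i, s] = sum_{s'} O[a, s', o] * T[a, s, s'] * Gamma[i, s']
            g = Gamma @ (T[a] * O[a][:, o]).T
            alpha = alpha + gamma * g[np.argmax(g @ b)]   # best vector for (a, o)
        if b @ alpha > best_val:
            best_val, best_alpha = b @ alpha, alpha
    return best_alpha

Gamma = np.zeros((1, 2))                   # start from the zero value function
beliefs = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]
for _ in range(50):
    Gamma = np.array([point_based_backup(b, Gamma) for b in beliefs])
print(Gamma @ np.array([0.9, 0.1]))        # value estimates at one belief
```

Each backup touches only the sampled beliefs, which is what keeps point-based methods tractable relative to exact value iteration over the whole belief simplex.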
Safe Zeroth-Order Optimization Using Quadratic Local Approximations
This paper addresses black-box smooth optimization problems, where the
objective and constraint functions are not explicitly known but can be queried.
The main goal of this work is to generate a sequence of feasible points
converging towards a KKT primal-dual pair. Assuming prior knowledge of the
smoothness of the unknown objective and constraints, we propose a novel
zeroth-order method that iteratively computes quadratic approximations of the
constraint functions, constructs local feasible sets and optimizes over them.
Under some mild assumptions, we prove that this method returns an ε-KKT
pair (a property reflecting how close a primal-dual pair is to the exact KKT
conditions) within a bounded number of iterations. Moreover, we numerically show
that our method can achieve faster convergence compared with some
state-of-the-art zeroth-order approaches. The effectiveness of the proposed
approach is also illustrated by applying it to nonconvex optimization problems
in optimal control and power system operation.
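As a rough illustration of the idea (not the paper's exact method), the sketch below estimates gradients purely from function queries and uses a known smoothness constant M to build a quadratic upper bound on the constraint, then takes the largest step that this bound certifies as feasible. The problem instance and constants are assumptions for the demo.

```python
import numpy as np

# min f(x) s.t. g(x) <= 0, with both functions only query-accessible.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0     # feasible set: disk of radius sqrt(2)
M = 2.0                                        # assumed smoothness constant of g

def fd_grad(h, x, eps=1e-6):
    """Central finite-difference gradient (zeroth-order oracle only)."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (h(x + e) - h(x - e)) / (2 * eps)
    return grad

x = np.array([0.0, 0.0])                       # strictly feasible start
for _ in range(100):
    d = -fd_grad(f, x)                         # descent direction for f
    # Largest t with g(x) + t*<grad g, d> + (M/2) t^2 ||d||^2 <= 0,
    # i.e. the step the quadratic upper bound certifies as safe:
    a = 0.5 * M * (d @ d)
    b = fd_grad(g, x) @ d
    c = g(x)
    t_safe = (-b + np.sqrt(max(b * b - 4 * a * c, 0.0))) / (2 * a)
    x = x + min(t_safe, 0.25) * d              # 0.25 also caps the objective step
print(x, g(x))
```

Every iterate remains feasible by construction, and the run converges to the boundary point of the disk closest to the unconstrained minimizer (2, 1).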
Variational Quantum Time Evolution without the Quantum Geometric Tensor
The real- and imaginary-time evolution of quantum states are powerful tools
in physics and chemistry to investigate quantum dynamics, prepare ground states
or calculate thermodynamic observables. They also find applications in wider
fields such as quantum machine learning or optimization. On near-term devices,
variational quantum time evolution is a promising candidate for these tasks, as
the required circuit model can be tailored to trade off available device
capabilities and approximation accuracy. However, even if the circuits can be
reliably executed, variational quantum time evolution algorithms quickly become
infeasible for relevant system sizes: they require the calculation of the
Quantum Geometric Tensor, whose complexity scales quadratically with the
number of parameters in the circuit. In this work, we propose a solution to
this scaling problem by leveraging a dual formulation that circumvents the
explicit evaluation of the Quantum Geometric Tensor. We demonstrate our
algorithm for the time evolution of the Heisenberg Hamiltonian and show that it
accurately reproduces the system dynamics at a fraction of the cost of standard
variational quantum time evolution algorithms. As an application, we calculate
thermodynamic observables with the QMETTS algorithm.
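The quadratic scaling can be seen in miniature even classically. The toy below runs variational imaginary-time evolution for a single-qubit Ry ansatz under H = X, using one common McLachlan convention (assumed here; sign and phase conventions vary): it solves G·θ̇ = C, where G is the (here 1×1) geometric tensor. For n parameters G has n² entries; the dual view recasts θ̇ as the minimizer of ½vᵀGv − Cᵀv, which can be attacked iteratively without assembling G.

```python
import numpy as np

# Toy variational imaginary-time evolution; ansatz |psi(theta)> = Ry(theta)|0>,
# Hamiltonian H = X. All modeling choices are illustrative assumptions.
H = np.array([[0.0, 1.0], [1.0, 0.0]])              # Pauli X

def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def dstate(theta):
    return 0.5 * np.array([-np.sin(theta / 2), np.cos(theta / 2)])

theta, dt = 0.3, 0.01
for _ in range(2000):
    psi, dpsi = state(theta), dstate(theta)
    G = dpsi @ dpsi - (dpsi @ psi) ** 2             # geometric tensor (1x1 here)
    C = -(dpsi @ H @ psi)                           # imaginary-time RHS
    # Dual view: theta_dot = argmin_v 0.5*G*v**2 - C*v, solvable without
    # explicitly forming G for many parameters; here we just solve directly.
    theta += dt * C / G
energy = state(theta) @ H @ state(theta)
print(theta, energy)                                # theta -> -pi/2, energy -> -1
```

The flow drives the ansatz to the ground state of X (energy −1), mirroring how imaginary-time evolution is used to prepare ground states.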
Convergence Rate of Nonconvex Douglas-Rachford splitting via merit functions, with applications to weakly convex constrained optimization
We analyze Douglas-Rachford splitting techniques applied to solving weakly
convex optimization problems. Under mild regularity assumptions, and with the
help of a suitable merit function, we show convergence to critical points and
local linear rates of convergence. The merit function, comparable to the Moreau
envelope in Variational Analysis, generates a descent sequence, a feature that
allows us to extend to the non-convex setting arguments employed in convex
optimization. A by-product of our approach is an ADMM-like method for
constrained problems with weakly convex objective functions. When specialized
to multistage stochastic programming, the proposal yields a nonconvex version
of the Progressive Hedging algorithm that converges with linear speed. The
numerical assessment on a battery of phase retrieval problems shows promising
numerical performance of our method, when compared to existing algorithms in
the literature. Comment: 24 pages, 1 figure
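For reference, the basic Douglas-Rachford iteration alternates two proximal steps with a correction. The convex toy below solves min ½‖x − a‖² + λ‖x‖₁, whose solution is soft-thresholding of a, so the fixed point is easy to verify; the paper's contribution is extending such guarantees to the weakly convex setting via a merit function.

```python
import numpy as np

def prox_quad(z, a, t):
    """Prox of t*f with f(x) = 0.5*||x - a||^2."""
    return (z + t * a) / (1 + t)

def prox_l1(z, w):
    """Prox of w*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

a, lam, t = np.array([3.0, -0.2, 1.5]), 1.0, 1.0
z = np.zeros_like(a)
for _ in range(200):
    x = prox_quad(z, a, t)             # prox step on the smooth part
    y = prox_l1(2 * x - z, t * lam)    # reflected prox step on the l1 part
    z = z + y - x                      # Douglas-Rachford correction
print(x)   # -> soft-threshold of a at lam: [2. 0. 0.5]
```

At the fixed point, x equals the soft-thresholded vector, matching the closed-form minimizer; each coordinate converges geometrically here.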
Fair Assortment Planning
Many online platforms, ranging from online retail stores to social media
platforms, employ algorithms to optimize their offered assortment of items
(e.g., products and contents). These algorithms tend to prioritize the
platforms' short-term goals by solely featuring items with the highest
popularity or revenue. However, this practice can lead to undesirable
outcomes for the remaining items, driving them off the platform and, in turn,
hurting the platform's long-term goals. Motivated by this, we introduce and
study a fair assortment planning problem, which requires any two items with
similar quality/merits to be offered similar outcomes. We show that the problem
can be formulated as a linear program (LP), called (FAIR), that optimizes over
the distribution of all feasible assortments. To find a near-optimal solution
to (FAIR), we propose a framework based on the Ellipsoid method, which requires
a polynomial-time separation oracle to the dual of the LP. We show that finding
an optimal separation oracle to the dual problem is an NP-complete problem, and
hence we propose a series of approximate separation oracles, which result in
a constant-factor approximation algorithm and a PTAS for the original Problem
(FAIR). The
approximate separation oracles are designed by (i) showing the separation
oracle to the dual of the LP is equivalent to solving an infinite series of
parameterized knapsack problems, and (ii) taking advantage of the structure of
the parameterized knapsack problems. Finally, we conduct a case study using the
MovieLens dataset, which demonstrates the efficacy of our algorithms and
further sheds light on the price of fairness. Comment: 86 pages, 7 figures
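The knapsack subproblem at the heart of such an oracle can be solved by the textbook dynamic program below; the values and weights here are hypothetical stand-ins for the dual prices and capacity terms that would parameterize the real oracle.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by DP: best[c] = max value achievable within capacity c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities backwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical numbers: values could encode dual prices, weights the space
# an item occupies in an assortment.
print(knapsack([6, 10, 12], [1, 2, 3], 5))   # -> 22
```

The DP runs in O(n·capacity) time, which is what makes repeatedly solving parameterized knapsack instances inside a separation oracle practical.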
A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting. This could benefit complex sectors which only have scarce data to predict business viability.
To begin executing the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organized into a risk taxonomy. Labour was the most commonly reported top challenge; therefore, research was conducted to explore lean principles for improving productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation that improves economic estimation accuracy even without precise production or financial data. The model assessed two VPF cases (one in the UK and one in Japan), providing the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases.
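A minimal sketch of this kind of probabilistic viability model, assuming purely hypothetical cost and yield distributions (the thesis uses imprecise-data techniques rather than plain Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Hypothetical annual figures for a small vertical farm (illustrative only):
yield_kg = rng.normal(50_000, 8_000, n)     # crop yield, kg/year
price    = rng.normal(6.0, 1.0, n)          # sale price, GBP/kg
energy   = rng.normal(120_000, 20_000, n)   # energy cost, GBP/year
labour   = rng.normal(90_000, 10_000, n)    # labour cost, GBP/year
capex, years, rate = 500_000, 10, 0.05      # upfront cost, horizon, discount

cash = yield_kg * price - energy - labour   # annual net cash flow per scenario
discount = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
npv = -capex + cash * discount              # net present value distribution
print(f"mean NPV: {npv.mean():,.0f} GBP")
print(f"P(NPV < 0): {(npv < 0).mean():.1%}")
```

The output is a full NPV distribution rather than a point estimate, so the probability of loss can be read off directly, which is the "risk-empowered" view the DSS aims at.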
An environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint against traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the 'problem of implementation' and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector's emergence.
Demand fulfillment in customer hierarchies with stochastic demand
Supply scarcity, due to demand or supply fluctuations, is a common issue in make-to-stock production systems. To increase profits when customers are heterogeneous, firms need to decide whether to accept a customer order or reject it in anticipation of more profitable orders, and if accepted, which supplies to use in order to fulfill the order. Such issues are addressed by solving demand fulfillment problems. In order to provide a solution, firms commonly divide their customers into different segments, based on their respective profitability. The available supply is first allocated to the customer segments based on their projected demand information. Then, as customer orders materialize, the allocated quotas are consumed. The customer segments commonly have a multilevel hierarchical structure, which reflects the structure of the sales organization. In this thesis, we study the demand fulfillment problem in make-to-stock production systems, considering such customer hierarchies with stochastic demand.
In the hierarchical setting, the available supply is allocated level by level from top to bottom of the hierarchy by multiple planners on different levels. The planners on higher levels of the hierarchy need to make their allocation decisions based on aggregated information, since transmitting all detailed demand information from the bottom to the top of the hierarchy is not generally feasible. In practice, simplistic rules of thumb are applied to deal with this decentralized problem, which lead to sub-optimal results. We aim to provide more effective approaches that result in near-optimal solutions to this decentralized problem.
We first consider the single-period problem with a single supply replenishment and focus on identifying critical information for good, decentralized allocation decisions. We propose two decentralized allocation methods, namely a stochastic Theil index approximation and a clustering approach, which provide near-optimal results even for large, complicated hierarchies. Both methods transmit aggregated information about profit heterogeneity and demand uncertainty in the hierarchy, which is missing in the current simplistic rules.
Subsequently, we expand our analysis to a multi-period setting, in which periodic supply replenishments are considered and periods are interconnected by inventory or backlog. We consider a periodic setting, meaning that in each period we allow multiple orders from multiple customer segments. We first formalize the centralized problem as a two-stage stochastic dynamic program. Due to the curse of dimensionality, the problem is computationally intractable. Therefore, we propose an approximate dynamic programming heuristic. For the decentralized case, we consider our proposed clustering method and modify it to fit the multi-period setting, relying on the approximate dynamic programming heuristic. Our results show that the proposed heuristics lead to profits very close to the ex-post optimal solution for both centralized and decentralized problems.
Finally, we look into the order promising stage and compare different consumption functions, namely partitioned, rule-based nested, and bid price methods. Our results show that nesting leads to performance improvements compared to partitioned consumption.
However, for decentralized problems, the improvement from nesting cannot offset the profit loss caused by the considerable mis-allocations of simplistic rules, except in cases with high demand uncertainty or low profit heterogeneity. Moreover, among the nested consumption functions, the bid price approach, which integrates the allocation and consumption stages, achieves higher performance than the rule-based consumption methods.
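The difference between partitioned and nested consumption can be sketched in a few lines (hypothetical quotas and order sizes; the thesis's bid-price variant additionally prices each unit consumed):

```python
# Segments are indexed from most to least profitable, each holding a quota.
def partitioned(quotas, seg, qty):
    """An order may draw only from its own segment's quota."""
    used = min(quotas[seg], qty)
    quotas[seg] -= used
    return used

def nested(quotas, seg, qty):
    """An order draws from its own quota first, then from lower segments."""
    used = 0
    for s in range(seg, len(quotas)):   # own segment, then less profitable ones
        take = min(quotas[s], qty - used)
        quotas[s] -= take
        used += take
        if used == qty:
            break
    return used

q_part, q_nest = [10, 5], [10, 5]
print(partitioned(q_part, 0, 12), nested(q_nest, 0, 12))   # -> 10 12
```

A top-segment order for 12 units is capped at 10 under partitioned consumption but fully served under nesting, which is why nesting helps when high-value demand exceeds its own quota.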
Altogether, our proposed decentralized methods lead to drastic profit improvements compared to the current simplistic rules for demand fulfillment in customer hierarchies, except for cases with very low shortage or for largely homogeneous customers, where simplistic rules perform similarly well. Applying our advanced methods is especially important when the shortage rate is high or customers are more heterogeneous. Regarding order promising, nesting is more crucial when demand uncertainty is high.
The research presented in this thesis was undertaken as part of the project "demand fulfillment in customer hierarchies". It was funded by the German Research Foundation (DFG) under grant FL738/2-1.
Efficient Neural Network Verification Using Branch and Bound
Neural networks have demonstrated great success in modern machine learning systems. However, they remain susceptible to incorrect corner-case behaviors, often behaving unpredictably and producing surprisingly wrong results. It is therefore desirable to formally guarantee their trustworthiness with respect to certain robustness properties when they are applied in safety- and security-sensitive systems such as autonomous vehicles and aircraft. Unfortunately, the task is extremely challenging due to the complexity of neural networks, and traditional formal methods were not efficient enough to verify practical properties. Recently, the Branch and Bound (BaB) framework has been extended to neural network verification and has shown great success in accelerating it.
This dissertation focuses on state-of-the-art neural network verifiers using BaB. We first introduce two efficient neural network verifiers, ReluVal and Neurify, which use basic BaB approaches involving two main steps: (1) recursively splitting the original verification problem into easier, independent subproblems by splitting input or hidden neurons; and (2) for each subproblem, applying an efficient and tight bound propagation method we propose, called symbolic interval analysis, which produces sound output bounds using convex linear relaxations. Both ReluVal and Neurify are three orders of magnitude faster than the previous state-of-the-art formal analysis systems on standard verification benchmarks.
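The simplest sound bound propagation is plain interval arithmetic; a minimal sketch is below. Symbolic interval analysis and CROWN-style linear relaxations are tighter refinements of the same idea, and the weights here are hypothetical.

```python
import numpy as np

def interval_forward(layers, lo, hi):
    """Soundly propagate an input box [lo, hi] through affine+ReLU layers."""
    for W, b in layers:
        mid, rad = (lo + hi) / 2, (hi - lo) / 2          # box center and radius
        mid = W @ mid + b
        rad = np.abs(W) @ rad                            # worst-case spread under W
        lo = np.maximum(mid - rad, 0)                    # ReLU on lower bound
        hi = np.maximum(mid + rad, 0)                    # ReLU on upper bound
    return lo, hi

# One hidden layer (hypothetical weights); the input box is the unit square.
layers = [(np.array([[1.0, -1.0], [1.0, 1.0]]), np.zeros(2))]
lo, hi = interval_forward(layers, np.zeros(2), np.ones(2))
print(lo, hi)   # -> [0. 0.] [1. 2.]
```

Splitting the input box in half and re-propagating each piece yields tighter bounds whose union still covers the original box; that refinement loop is exactly what BaB verifiers iterate.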
However, basic BaB approaches like Neurify have to formulate each subproblem as a Linear Programming (LP) problem and solve it with expensive LP solvers, significantly limiting overall efficiency. This is because each BaB step introduces neuron split constraints (e.g., a ReLU neuron constrained to be larger or smaller than 0), which are hard to handle with existing efficient bound propagation methods. We propose novel bound propagation methods, α-CROWN and its improved variant β-CROWN, which solve the verification problem by optimizing Lagrangian multipliers with gradient ascent, without calling any expensive LP solvers. Both build on our previous work CROWN, a generalized efficient bound propagation method using linear relaxation. BaB verification using α-CROWN and β-CROWN not only provides tighter output bounds than most bound propagation methods but can also fully leverage GPU acceleration with massive parallelization.
Combining our methods with BaB empowers the state-of-the-art verifier α,β-CROWN (alpha-beta-CROWN), the winning tool of the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score. α,β-CROWN can be three orders of magnitude faster than LP-solver-based BaB verifiers and is notably faster than all existing approaches on GPUs. Recently, we further generalized β-CROWN with an efficient iterative approach that tightens all intermediate-layer bounds under neuron split constraints, strengthening bound tightness without LP solvers. This new approach can greatly improve the efficiency of α,β-CROWN, especially on several challenging benchmarks.
Lastly, we study verifiable training, which incorporates verification properties into the training procedure to enhance the verifiable robustness of trained models and to scale verification to larger models and datasets. We propose two general verifiable training frameworks: (1) MixTrain, which significantly improves the efficiency and scalability of verifiable training, and (2) adaptive verifiable training, which improves verifiable robustness by accounting for label similarity. The combination of verifiable training and BaB-based verifiers opens promising directions for more efficient and scalable neural network verification.