Trading Safety Versus Performance: Rapid Deployment of Robotic Swarms with Robust Performance Constraints
In this paper we consider a stochastic deployment problem, where a robotic
swarm is tasked with the objective of positioning at least one robot at each of
a set of pre-assigned targets while meeting a temporal deadline. Travel times
and failure rates are stochastic but related, inasmuch as failure rates
increase with speed. To maximize chances of success while meeting the deadline,
a control strategy has therefore to balance safety and performance. Our
approach is to cast the problem within the theory of constrained Markov
Decision Processes, whereby we seek to compute policies that maximize the
probability of successful deployment while ensuring that the expected duration
of the task is bounded by a given deadline. To account for uncertainties in the
problem parameters, we consider a robust formulation and we propose efficient
solution algorithms, which are of independent interest. Numerical experiments
confirming our theoretical results are presented and discussed.
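The safety-versus-performance trade-off described above can be illustrated on a toy single-robot, single-target version of the problem. All numbers below (deploy/failure probabilities, travel times, the deadline) are invented for illustration and are not taken from the paper; the sketch searches over stationary randomized policies directly rather than solving the paper's full constrained-MDP program, but it exhibits the same structure: a faster action raises the failure rate, and the deadline forces a mix.

```python
# Toy single-robot deployment: pick "fast" or "slow" each attempt until the
# episode is absorbed in "deployed" (success) or "failed".
# Hypothetical numbers, not from the paper:
#   fast: deploy w.p. 0.70, fail w.p. 0.20, retry w.p. 0.10, takes 1 time unit
#   slow: deploy w.p. 0.60, fail w.p. 0.02, retry w.p. 0.38, takes 2 time units

def evaluate(p_fast):
    """Success probability and expected duration of the stationary
    randomized policy that chooses 'fast' with probability p_fast."""
    q_deploy = 0.70 * p_fast + 0.60 * (1 - p_fast)
    q_fail = 0.20 * p_fast + 0.02 * (1 - p_fast)
    step_time = 1.0 * p_fast + 2.0 * (1 - p_fast)
    absorb = q_deploy + q_fail              # chance an attempt ends the episode
    p_success = q_deploy / absorb           # absorbed in "deployed", not "failed"
    expected_time = step_time / absorb      # geometric number of attempts
    return p_success, expected_time

DEADLINE = 2.0
best = None
for i in range(1001):                       # grid over randomized policies
    p = i / 1000
    succ, t = evaluate(p)
    if t <= DEADLINE and (best is None or succ > best[1]):
        best = (p, succ, t)

p_star, succ_star, t_star = best
```

Going all-slow here maximizes success probability (about 0.97) but violates the deadline; all-fast meets it easily but drops success to about 0.78. The constrained optimum is a randomization between the two, which is exactly why constrained MDPs generally require randomized policies.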
The role of learning on industrial simulation design and analysis
The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging
from sourcing to operations to finance, starting at the strategic level and proceeding towards tactical and
operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond
being a static problem-solving exercise and requires integration with learning. This article discusses the role
of learning in simulation design and analysis motivated by the needs of industrial problems and describes
how selected tools of statistical learning can be utilized for this purpose.
Investment and the Dynamic Cost of Income Uncertainty: the Case of Diminishing Expectations in Agriculture
This paper studies optimal investment and the dynamic cost of income uncertainty, applying a stochastic programming approach. The motivation is given by a case study in Finnish agriculture. The investment decision is modelled as a Markov decision process, extended to account for risk. A numerical framework for studying the dynamic uncertainty cost is presented, modifying the classical expected value of perfect information to a dynamic setting. The uncertainty cost depends on the volatility of income; e.g. with stationary income, the dynamic uncertainty cost corresponds to a dynamic option value of postponing investment. The numerical investment model also yields the optimal investment behavior of a representative farm. The model can be applied, e.g., in planning investment subsidies for maintaining target investments. In the case study, the investment decision is sensitive to risk.
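The "dynamic option value of postponing investment" mentioned above can be made concrete with a two-period toy example. All figures below are invented for illustration and do not come from the paper's case study: investing now commits under income uncertainty, while waiting one period reveals the income state and lets the farm invest only when it pays.

```python
# Two-period toy model of the option value of postponing investment.
# Hypothetical numbers, not from the paper's Finnish case study.
COST = 10.0                  # investment cost
V_HIGH, V_LOW = 15.0, 8.0    # present value of income if the state is high/low
P_HIGH = 0.5                 # probability the income state turns out high
BETA = 0.95                  # one-period discount factor

# Invest now: commit before the income state is known.
invest_now = P_HIGH * V_HIGH + (1 - P_HIGH) * V_LOW - COST

# Wait one period: learn the state first, then invest only if profitable.
wait = BETA * (P_HIGH * max(V_HIGH - COST, 0.0)
               + (1 - P_HIGH) * max(V_LOW - COST, 0.0))

option_value = wait - invest_now   # value of keeping the option open
```

Here investing now yields an expected 1.5, while waiting yields 2.375, so postponement is worth 0.875 despite discounting: the waiting policy avoids the loss-making low-income branch entirely. This is the flavor of uncertainty cost the paper's dynamic framework quantifies over many periods.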
Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability
A trustworthy reinforcement learning algorithm should be competent in solving
challenging real-world problems, including robustly handling uncertainties,
satisfying safety constraints to avoid catastrophic failures, and
generalizing to unseen scenarios during deployment. This study aims to
overview these main perspectives of trustworthy reinforcement learning
considering its intrinsic vulnerabilities on robustness, safety, and
generalizability. In particular, we give rigorous formulations, categorize
corresponding methodologies, and discuss benchmarks for each perspective.
Moreover, we provide an outlook section to spur promising future directions
with a brief discussion on extrinsic vulnerabilities considering human
feedback. We hope this survey can bring separate threads of study
together in a unified framework and promote the trustworthiness of
reinforcement learning.
Planning under risk and uncertainty
This thesis concentrates on the optimization of large-scale management policies under conditions of risk and uncertainty.

In paper I, we address the problem of solving large-scale spatial and temporal natural resource management problems. To model these types of problems, the framework of graph-based Markov decision processes (GMDPs) can be used. Two algorithms for computation of high-quality management policies are presented: the first is based on approximate linear programming (ALP) and the second is based on mean-field approximation and approximate policy iteration (MF-API). The applicability and efficiency of the algorithms were demonstrated by their ability to compute near-optimal management policies for two large-scale management problems. It was concluded that the two algorithms compute policies of similar quality. However, the MF-API algorithm should be used when both the policy and the expected value of the computed policy are required, while the ALP algorithm may be preferred when only the policy is required.

In paper II, a number of reinforcement learning algorithms are presented that can be used to compute management policies for GMDPs when the transition function can only be simulated because its explicit formulation is unknown. Studies of the efficiency of the algorithms for three management problems led us to conclude that some of these algorithms were able to compute near-optimal management policies.

In paper III, we used the GMDP framework to optimize long-term forestry management policies under stochastic wind-damage events. The model was demonstrated by a case study of an estate consisting of 1,200 ha of forest land, divided into 623 stands. We concluded that managing the estate according to the risk of wind damage increased the expected net present value (NPV) of the whole estate only slightly, less than 2%, under different wind-risk assumptions. Most of the stands were managed in the same manner as when the risk of wind damage was not considered. However, the analysis rests on properties of the model that need to be refined before definite conclusions can be drawn.
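The setting of paper II, where the transition function is available only through simulation, can be sketched with tabular Q-learning on a hypothetical single forest stand exposed to wind-damage risk. Everything below (age classes, timber values, damage probability, salvage fraction) is invented for illustration; the thesis works with factored graph-based MDPs over many interacting stands, which this single-stand sketch deliberately ignores.

```python
import random

# Hypothetical single-stand model (illustrative numbers, not from the thesis):
# state = age class 0..4; action 0 = wait, 1 = harvest.
AGES = 5
VALUE = [0.0, 2.0, 5.0, 9.0, 12.0]    # timber value by age class
P_WIND = 0.05                          # per-period wind-damage probability
SALVAGE = 0.4                          # value fraction recovered after damage
GAMMA = 0.95

def simulate(state, action, rng):
    """Simulated transition -- stands in for the unknown explicit model."""
    if action == 1:                        # harvest now, stand regenerates
        return 0, VALUE[state]
    if rng.random() < P_WIND:              # wind damage forces a salvage cut
        return 0, SALVAGE * VALUE[state]
    return min(state + 1, AGES - 1), 0.0   # stand grows one age class

def q_learning(episodes=10000, alpha=0.1, eps=0.1, horizon=30, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(AGES)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy action selection over the two actions
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r = simulate(s, a, rng)
            q[s][a] += alpha * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(AGES)]
```

The learned policy waits while the stand is young and harvests at the oldest age class, where waiting only risks wind damage for no further growth. The GMDP algorithms in the thesis extend this idea to policies over whole graphs of stands, where exact tabular methods are infeasible.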