Active network management for electrical distribution systems: problem formulation, benchmark, and approximate solution
With the increasing share of renewable and distributed generation in
electrical distribution systems, Active Network Management (ANM) becomes a
valuable option for a distribution system operator to operate its system in a
secure and cost-effective way without relying solely on network reinforcement.
ANM strategies are short-term policies that control the power injected by
generators and/or taken off by loads in order to avoid congestion or voltage
issues. Advanced ANM strategies imply that the system operator has to solve
large-scale optimal sequential decision-making problems under uncertainty. For
example, decisions taken at a given moment constrain the future decisions that
can be taken and uncertainty must be explicitly accounted for because neither
demand nor generation can be accurately forecasted. We first formulate the ANM
problem, which, in addition to being sequential and uncertain, has a nonlinear
nature stemming from the power flow equations and a discrete nature arising
from the activation of power modulation signals. This ANM problem is then cast
as a stochastic mixed-integer nonlinear program, as well as second-order cone
and linear counterparts, for which we provide quantitative results using
state-of-the-art solvers and perform a sensitivity analysis over the size of the
system, the amount of available flexibility, and the number of scenarios
considered in the deterministic equivalent of the stochastic program. To foster
further research on this problem, we make available at
http://www.montefiore.ulg.ac.be/~anm/ three test beds based on distribution
networks of 5, 33, and 77 buses. These test beds contain a simulator of the
distribution system, with stochastic models for the generation and consumption
devices, and callbacks to implement and test various ANM strategies.
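The deterministic equivalent mentioned in the abstract can be illustrated with a toy two-stage problem: a discrete first-stage decision (activating a power modulation signal) is chosen to minimize the probability-weighted recourse cost over a handful of scenarios. This is a minimal sketch; the single-feeder model, costs, and scenario values are invented assumptions, not the paper's benchmark.

```python
# Toy deterministic equivalent of a two-stage stochastic program, in the
# spirit of the ANM formulation above. All numbers and the single-feeder
# model are illustrative assumptions, not the paper's test beds.

# Scenarios: (probability, forecast generation MW, demand MW) -- assumed
scenarios = [(0.5, 8.0, 5.0), (0.3, 6.0, 5.0), (0.2, 10.0, 5.0)]
line_capacity = 4.0   # export limit of the feeder (assumed)
curtail_cost = 10.0   # cost per MW of curtailed generation (assumed)
signal_cost = 2.0     # fixed cost of activating the modulation signal (assumed)

def recourse_cost(activate, gen, demand):
    """Second-stage cost of one scenario, given the first-stage signal."""
    excess = max(0.0, (gen - demand) - line_capacity)
    if not activate:
        # Without the signal, excess injection is curtailed at full price.
        return curtail_cost * excess
    # With the signal active, assume curtailment is available at half price.
    return signal_cost + 0.5 * curtail_cost * excess

def expected_cost(activate):
    """Probability-weighted cost over all scenarios (the SAA objective)."""
    return sum(p * recourse_cost(activate, g, d) for p, g, d in scenarios)

# Discrete first stage: enumerate the binary activation decision.
best = min([False, True], key=expected_cost)
print(best, round(expected_cost(best), 2))
```

In a real ANM instance, the recourse function would itself be a (mixed-integer) power-flow optimization per scenario, which is why the full problem is a stochastic MINLP.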
Software tools for stochastic programming: A Stochastic Programming Integrated Environment (SPInE)
SP models combine the paradigm of dynamic linear programming with
modelling of random parameters, providing optimal decisions which hedge
against future uncertainties. Advances in hardware as well as software
techniques and solution methods have made SP a viable optimisation tool.
We identify a growing need for modelling systems which support the creation
and investigation of SP problems. Our SPInE system integrates a number of
components which include a flexible modelling tool (based on stochastic
extensions of the algebraic modelling languages AMPL and MPL), stochastic
solvers, as well as special purpose scenario generators and database tools.
We introduce an asset/liability management model and illustrate how SPInE
can be used to create and process this model as a multistage SP application.
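The scenario generators mentioned above enumerate random-parameter realizations over a tree. A minimal sketch of that idea, independent of SPInE's own (closed) tooling: build all root-to-leaf paths of a symmetric scenario tree with per-branch probabilities. The branching values are invented assumptions.

```python
# Minimal symmetric scenario-tree generator, in the spirit of the scenario
# generators SPInE couples with its modelling tools. Branch values and
# probabilities are illustrative assumptions, not SPInE's API.

from itertools import product

def build_scenarios(stages, branches):
    """Enumerate all root-to-leaf paths of a symmetric scenario tree.

    branches: list of (probability, outcome) pairs reused at every stage.
    Returns a list of (path_probability, outcomes_tuple).
    """
    paths = []
    for combo in product(branches, repeat=stages):
        prob = 1.0
        outcomes = []
        for p, o in combo:
            prob *= p
            outcomes.append(o)
        paths.append((prob, tuple(outcomes)))
    return paths

# Two stages; an asset return is either up 5% or down 2% (assumed values).
tree = build_scenarios(2, [(0.6, 1.05), (0.4, 0.98)])
assert abs(sum(p for p, _ in tree) - 1.0) < 1e-12  # probabilities sum to 1
```

A deterministic-equivalent multistage SP then attaches one set of decision variables to every node of this tree, with non-anticipativity enforced by the tree structure itself.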
Planning for Decentralized Control of Multiple Robots Under Uncertainty
We describe a probabilistic framework for synthesizing control policies for
general multi-robot systems, given environment and sensor models and a cost
function. Decentralized, partially observable Markov decision processes
(Dec-POMDPs) are a general model of decision processes where a team of agents
must cooperate to optimize some objective (specified by a shared reward or cost
function) in the presence of uncertainty, but where communication limitations
mean that the agents cannot share their state, so execution must proceed in a
decentralized fashion. While Dec-POMDPs are typically intractable to solve for
real-world problems, recent research on the use of macro-actions in Dec-POMDPs
has significantly increased the size of problem that can be practically solved
as a Dec-POMDP. We describe this general model, and show how, in contrast to
most existing methods that are specialized to a particular problem class, it
can synthesize control policies that use whatever opportunities for
coordination are present in the problem, while trading off uncertainty in
outcomes, sensor information, and information about other agents. We use three
variations on a warehouse task to show that a single planner of this type can
generate cooperative behavior using task allocation, direct communication, and
signaling, as appropriate.
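The core Dec-POMDP difficulty (agents share a reward but cannot share state at run time) can be shown on a one-step toy problem: each agent commits to a local policy in advance, and the team reward depends on the joint action and a hidden state. All numbers and the two-door domain are toy assumptions, not the warehouse tasks from the paper.

```python
# One-step Dec-POMDP-flavoured sketch: two agents, a hidden state, a shared
# reward, and no communication. Values below are invented for illustration.

states = {"left": 0.5, "right": 0.5}   # prior over the hidden state (assumed)
actions = ["open_left", "open_right"]

def reward(state, a1, a2):
    """Shared team reward for a joint action in a given hidden state."""
    if a1 == a2 == "open_" + state:
        return 10.0    # coordinated on the correct door
    if a1 == a2:
        return -5.0    # coordinated, but wrong door
    return -1.0        # miscoordination

def expected_reward(policy1, policy2):
    """Evaluate open-loop decentralized policies (no shared state at run time)."""
    return sum(p * reward(s, policy1, policy2) for s, p in states.items())

# Because neither agent observes the state, the best the team can do is
# agree on one door in advance: 0.5 * 10 + 0.5 * (-5) = 2.5.
best = max(((a1, a2) for a1 in actions for a2 in actions),
           key=lambda joint: expected_reward(*joint))
print(best, expected_reward(*best))
```

Macro-actions, as discussed in the abstract, attack the same planning problem at a coarser temporal scale, which is what makes realistically sized instances tractable.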
Risk-sensitive Inverse Reinforcement Learning via Semi- and Non-Parametric Methods
The literature on Inverse Reinforcement Learning (IRL) typically assumes that
humans take actions in order to minimize the expected value of a cost function,
i.e., that humans are risk neutral. Yet, in practice, humans are often far from
being risk neutral. To fill this gap, the objective of this paper is to devise
a framework for risk-sensitive IRL in order to explicitly account for a human's
risk sensitivity. To this end, we propose a flexible class of models based on
coherent risk measures, which allow us to capture an entire spectrum of risk
preferences from risk-neutral to worst-case. We propose efficient
non-parametric algorithms based on linear programming and semi-parametric
algorithms based on maximum likelihood for inferring a human's underlying risk
measure and cost function for a rich class of static and dynamic
decision-making settings. The resulting approach is demonstrated on a simulated
driving game with ten human participants. Our method is able to infer and mimic
a wide range of qualitatively different driving styles from highly risk-averse
to risk-neutral in a data-efficient manner. Moreover, comparisons of the
Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL
framework more accurately captures observed participant behavior both
qualitatively and quantitatively, especially in scenarios where catastrophic
outcomes such as collisions can occur.
Comment: Submitted to the International Journal of Robotics Research; Revision 1:
(i) clarified minor technical points; (ii) revised the proof of Theorem 3 to
hold under weaker assumptions; (iii) added additional figures and expanded
discussions to improve readability.
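The "spectrum from risk-neutral to worst-case" can be made concrete with one well-known coherent risk measure, Conditional Value-at-Risk (CVaR): the mean of the worst tail of the cost distribution. The sample costs and confidence level below are illustrative assumptions, not the paper's experimental data.

```python
# Sketch of one coherent risk measure, CVaR, of the kind the RS-IRL
# framework fits to demonstrations. Sample costs are invented.

def cvar(costs, alpha):
    """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of costs.

    alpha -> 0 recovers the risk-neutral mean; alpha -> 1 approaches the
    worst-case cost, spanning the spectrum described in the abstract.
    """
    ordered = sorted(costs, reverse=True)          # worst costs first
    k = max(1, round(len(ordered) * (1 - alpha)))  # size of the tail
    tail = ordered[:k]
    return sum(tail) / len(tail)

costs = [1.0, 2.0, 2.0, 3.0, 100.0]  # one catastrophic (collision-like) cost
print(cvar(costs, 0.0))   # risk-neutral: plain mean
print(cvar(costs, 0.8))   # worst 20%: dominated by the catastrophic outcome
```

A risk-neutral model averages the catastrophic cost away, while a risk-averse CVaR weights it heavily, which is exactly the behavioral difference RS-IRL is designed to recover from data.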
Integrating multicriteria decision analysis and scenario planning: review and extension
Scenario planning and multiple criteria decision analysis (MCDA) are two key management science tools used in strategic planning. In this paper, we explore the integration of these two approaches in a coherent manner, recognizing that each adds value to the implementation of the other. Various approaches that have been adopted for such integration are reviewed, with a primary focus on the process of constructing preferences both within and between scenarios. Biases that may be introduced by inappropriate assumptions during such processes are identified, and used to motivate a framework for integrating MCDA and scenario thinking, based on applying MCDA concepts across a range of "metacriteria" (combinations of scenarios and primary criteria). Within this framework, preferences according to each primary criterion can be expressed in the context of different scenarios. The paper concludes with a hypothetical but non-trivial example of agricultural policy planning in a developing country.
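The "metacriteria" idea (treating each scenario-criterion combination as its own criterion in an additive MCDA model) can be sketched in a few lines. All weights, alternatives, and scores below are invented purely for illustration; they are not from the paper's agricultural example.

```python
# Toy metacriteria evaluation in the spirit of the integrated MCDA/scenario
# framework: each metacriterion is a (scenario, criterion) pair, weighted by
# the product of its scenario and criterion weights. All values are assumed.

scenarios = {"drought": 0.4, "normal": 0.6}   # scenario weights (assumed)
criteria = {"yield": 0.7, "cost": 0.3}        # criterion weights (assumed)

# scores[alternative][(scenario, criterion)] on a 0-1 value scale (assumed)
scores = {
    "irrigation_subsidy": {("drought", "yield"): 0.9, ("drought", "cost"): 0.3,
                           ("normal", "yield"): 0.6, ("normal", "cost"): 0.4},
    "crop_insurance":     {("drought", "yield"): 0.5, ("drought", "cost"): 0.8,
                           ("normal", "yield"): 0.5, ("normal", "cost"): 0.9},
}

def overall(alternative):
    """Additive value across metacriteria: scenario weight x criterion weight."""
    return sum(scenarios[s] * criteria[c] * v
               for (s, c), v in scores[alternative].items())

ranking = sorted(scores, key=overall, reverse=True)
print(ranking[0], round(overall(ranking[0]), 3))
```

Keeping the scenario dimension explicit in the weights, rather than collapsing it before scoring, is what lets preferences be expressed per criterion within each scenario, as the framework advocates.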