
    Water Resources Decision Making Under Uncertainty

    Uncertainty is partly about variability in the physical characteristics of water resources systems, but it is also about ambiguity (Simonovic, 2009). Both variability and ambiguity are associated with a lack of clarity arising from the behaviour of system components, a lack of data, a lack of detail, a lack of structure for framing water resources management problems, the working assumptions used to consider those problems, known and unknown sources of bias, and ignorance about how much effort it is worth expending to clarify the management situation. Climate change, addressed in this research project (CFCAS, 2008), is another important source of uncertainty that contributes to the variability of the input variables for water resources management. This report presents a set of examples that illustrate (a) probabilistic and (b) fuzzy set approaches for solving various water resources management problems. The main goal of the report is to demonstrate how the information provided to water resources decision makers can be improved by using tools that incorporate risk and uncertainty. The uncertainty associated with water resources decision making problems is quantified using probabilistic and fuzzy set approaches. A set of selected examples illustrates the application of probabilistic and fuzzy simulation, optimization, and multi-objective analysis to water resources design, planning and operations. The selected examples include dike design, sewer pipe design, optimal operation of a single-purpose reservoir, and planning of a multi-purpose reservoir system. The demonstrated probabilistic and fuzzy tools can be easily adapted to many other water resources decision making problems.
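    As a hedged illustration of the probabilistic side of such analyses, exceedance probabilities relevant to, for example, dike design can be estimated by Monte Carlo simulation; the flood-level distribution, its parameters and the candidate crest heights below are illustrative assumptions rather than values from the report:

```python
# Hedged illustration, not the report's code: Monte Carlo estimate of the
# probability that the annual peak water level overtops a candidate dike crest.
# The lognormal parameters and crest heights are assumed for illustration.
import numpy as np

rng = np.random.default_rng(42)

def overtopping_probability(crest_height_m, mu, sigma, n_samples=100_000):
    """Estimate P(annual peak water level > crest height) by simulation."""
    peak_levels = rng.lognormal(mean=mu, sigma=sigma, size=n_samples)
    return float(np.mean(peak_levels > crest_height_m))

# Compare candidate dike heights under the assumed flood-level distribution.
for crest in (4.0, 4.5, 5.0):
    p = overtopping_probability(crest, mu=1.2, sigma=0.3)
    print(f"crest {crest:.1f} m -> estimated exceedance probability {p:.4f}")
```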

    The Human Factor in Transmission Network Expansion Planning: The Grid That a Sustainable Energy System Needs

    The decarbonization of the energy sector puts additional pressure on the transmission network. The main reason is that renewable sources are often most abundant in geographical areas far away from the main demand centers, so new transmission lines are required to connect the new renewable energy capacity. In addition, by connecting different geographical zones, the transmission network can smooth the intermittency and variability of renewable energy production. The changing energy landscape therefore creates a need to reinforce the transmission network through Transmission Network Expansion Planning. Ideally, all the idiosyncrasies of the electricity system would be considered in the operation and expansion planning process. However, several critical dimensions of the planning process are routinely ignored because they introduce parameters that are difficult to quantify and complexity that state-of-the-art planning methods cannot handle. This paper identifies the most relevant elements related to the human factor, grouped around three main topics: the human behind the technical, the human at the institutional level, and the human at the individual level. The paper also provides an additional formulation that can be used to upgrade existing models to include the human element and discusses the implications of these upgrades. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. Funding: This research has been carried out thanks to the Spanish Ministry of Economy and Competitiveness MINECO through the BC3 María de Maeztu excellence accreditation MDM-2017-0714, and through the funding of the openENTRANCE project (Open ENergy TRansition ANalyses for a low-carbon Economy) under the LC-SC3-CC-2-2018 call, Modelling in support to the transition to a Low-Carbon Energy System in Europe.
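    Purely as an illustrative sketch, and not the formulation proposed in the paper, a conventional TNEP investment-plus-operation objective could be extended with a hypothetical human-factor cost h_l attached to each candidate line l:

```latex
% Illustrative only: c_l = investment cost and x_l = binary build decision for
% candidate line l, o_g p_{g,t} = operating cost of generator g in period t,
% h_l = hypothetical human-factor cost (e.g. expected permitting delay or
% local opposition), weighted against monetary costs by \lambda.
\min_{x,\,p,\,f}\;
  \sum_{l \in \mathcal{L}^{+}} c_l x_l
  \;+\; \sum_{t}\sum_{g} o_g\, p_{g,t}
  \;+\; \lambda \sum_{l \in \mathcal{L}^{+}} h_l x_l
\quad \text{s.t.}\quad
\text{power balance, line flow limits, } x_l \in \{0,1\}.
```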

    Nonparametric geostatistical risk mapping

    In this work, a fully nonparametric geostatistical approach to estimating threshold exceedance probabilities is proposed. To estimate the large-scale variability (spatial trend) of the process, the nonparametric local linear regression estimator is used, with the bandwidth selected by a method that takes the spatial dependence into account. A bias-corrected nonparametric estimator of the variogram, obtained from the nonparametric residuals, is proposed to estimate the small-scale variability. Finally, a bootstrap algorithm is designed to estimate the unconditional probabilities of exceeding a threshold value at any location. The behavior of this approach is evaluated through simulation and through an application to a real data set.
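    A minimal sketch of the final step, assuming a trend value has already been fitted at a target location and the detrended residuals are resampled independently (the paper's bootstrap accounts for spatial dependence and variogram bias, which this toy version deliberately ignores):

```python
# Hedged sketch, not the authors' algorithm: a naive bootstrap estimate of the
# unconditional probability of exceeding a threshold at a location s0, given a
# fitted trend value there and the nonparametric residuals.
import numpy as np

def exceedance_probability(trend_at_s0, residuals, threshold, n_boot=2000, seed=0):
    """Resample residuals, add them to the trend, and count threshold crossings."""
    rng = np.random.default_rng(seed)
    simulated = trend_at_s0 + rng.choice(residuals, size=n_boot, replace=True)
    return float(np.mean(simulated > threshold))

residuals = np.random.default_rng(1).normal(0.0, 0.5, size=200)  # synthetic residuals
print(exceedance_probability(trend_at_s0=2.1, residuals=residuals, threshold=3.0))
```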

    A disposition of interpolation techniques

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists choose an appropriate technique, a disposition is made based on 1) applicability in space, time and space-time, 2) quantification of the accuracy of interpolated values, 3) incorporation of ancillary information, and 4) incorporation of process knowledge. The described methods include inverse distance weighting, nearest neighbour methods, geostatistical interpolation methods, Kalman filter methods, Bayesian Maximum Entropy methods, etc. The applicability of the methods to aggregation (upscaling) and disaggregation (downscaling) is discussed, and software for interpolation is described. The application of interpolation techniques is illustrated in two case studies: temporal interpolation of indicators for ecological water quality, and spatio-temporal interpolation and aggregation of pesticide concentrations in Dutch surface waters. A valuable next step will be to construct a decision tree or decision support system that guides environmental scientists to easy-to-use software implementations appropriate for their interpolation problem. Validation studies are needed to assess the quality of interpolated values and the quality of the information on uncertainty provided by each interpolation method.
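    As a small illustration, inverse distance weighting, the first method listed above, can be implemented in a few lines; the power parameter p = 2 below is a common but arbitrary choice, not one prescribed by the paper:

```python
# Minimal inverse distance weighting (IDW) interpolator.
import numpy as np

def idw(xy_known, values, xy_query, p=2.0, eps=1e-12):
    """Interpolate `values` observed at `xy_known` onto the `xy_query` points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** p              # weights decay with distance
    return (w @ values) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs = np.array([1.0, 3.0, 2.0])
print(idw(pts, obs, np.array([[0.5, 0.5]])))   # weighted average of the three values
```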

    Institutional paraconsciousness and its pathologies

    This analysis extends a recent mathematical treatment of the Baars consciousness model to analogous, but far more complicated, phenomena of institutional cognition. Individual consciousness is limited to a single, tunable, giant component of interacting cognitive modules, instantiating a Global Workspace. Human institutions, by contrast, support several, sometimes many, such giant components simultaneously, although their behaviour remains constrained to a topology generated by cultural context and by the path-dependence inherent to organizational history. Such highly parallel multitasking - institutional paraconsciousness - while clearly limiting inattentional blindness and the consequences of failures within individual workspaces, does not eliminate them, and introduces new characteristic dysfunctions involving the distortion of information sent between global workspaces. Consequently, organizations (or machines designed along these principles), while highly efficient at certain kinds of tasks, remain subject to canonical and idiosyncratic failure patterns similar to, but more complicated than, those afflicting individuals. Remediation is complicated by the manner in which pathogenic externalities can write images of themselves on both institutional function and therapeutic intervention, in the context of relentless market selection pressures. The approach is broadly consonant with recent work on collective efficacy, collective consciousness, and distributed cognition.

    A Mean-Risk Mixed Integer Nonlinear Program for Network Protection

    Many of the infrastructure sectors considered crucial by the Department of Homeland Security include networked systems (physical and temporal) whose function is to move some commodity, such as electricity, people, or communication, from one location of importance to another. The costs associated with these flows make up the price of the network's normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. In order to limit the expense of meeting a network's normal demand, we aim to increase the resilience of the system, and specifically the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Because the future events that cause damage, both minor and extreme, to these networks cannot be predicted, officials must decide how best to allocate limited funds now so that these essential systems can withstand the heavy weight of society's reliance. We model these resource allocation decisions using a two-stage stochastic program (SP) for network protection. Starting with a general form for a basic two-stage SP, we enforce assumptions that specify characteristics key to this type of decision model. The second-stage objective, which represents the price of the network's routine functionality, is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. Once the model has been designed to reflect the network protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform the problematic model into one that is convex, separable, and much more tractable. Our approach uses perspective functions to convexify the feasible set of the second stage, and second-order cone constraints to represent the nonlinear constraints in a form better suited to computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that allows us to control the balance between an efficient, solvable model and the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations included in this formulation are then implemented on a transportation network with given flow demand.
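    As a hedged illustration of the convexification idea (the specific congestion-cost form below is an assumption, not necessarily the one used in this work), suppose the second-stage cost on arc a is f_a(y_a) = y_a^2 / c_a(x_a), where y_a is the flow and c_a(x_a) is the capacity remaining after the first-stage protection decision x_a. Introducing epigraph variables t_a yields a second-order cone representable subproblem:

```latex
% Illustrative sketch only: y_a = arc flow, c_a(x_a) = protected capacity,
% t_a = epigraph variable bounding the congestion cost on arc a.
\min_{y,\,t}\; \sum_{a} t_a
\quad \text{s.t.} \quad
y_a^{2} \;\le\; t_a\, c_a(x_a), \qquad t_a \ge 0, \;\; c_a(x_a) \ge 0.
```

    Each constraint y_a^2 <= t_a c_a(x_a) is a rotated second-order cone, and y_a^2 / c_a is the perspective of y^2, which is jointly convex in (y_a, c_a) for positive capacity; under this assumed cost form the second stage becomes convex and separable across arcs, which is what makes a Benders-style decomposition practical.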

    Embedding Decision Heuristics in Discrete Choice Models: A Review

    Contrary to the usual assumption of fixed, well-defined preferences, it is increasingly evident that individuals are likely to approach a choice task using rules and decision heuristics that depend on the choice environment. More specifically, heuristics defined by the local choice context, such as the gains or losses of an attribute value relative to the other attributes, seem to be consistently employed. Recent empirical findings also demonstrate that previous choices and previously encountered choice tasks shown to respondents can affect the current choice outcome, indicating a form of inter-dependence across choice sets. This paper is primarily focused on reviewing how heuristics have been modelled in stated choice data. The paper begins with a review of the heuristics that may be relevant for coping with choice task complexity and then proceeds to discuss some modelling approaches. Next, relational heuristics such as prospect theory, random regret minimisation and extremeness aversion (the compromise effect) are discussed; these are heuristics which operate within the local choice set. Another major class of heuristics reviewed in this paper pertains to ordering effects and, more generally, to the influence of past outcomes and past attribute levels of the alternatives.
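    For concreteness, one widely used specification of the random regret minimisation heuristic mentioned above compares each alternative attribute-by-attribute with its competitors; the notation below is illustrative and not taken from the paper under review:

```latex
% Illustrative: \beta_m is the taste parameter for attribute m, x_{im} the level
% of attribute m in alternative i; regret accumulates over competitors j and
% attributes m, and a logit form maps negative regret to choice probabilities.
R_i \;=\; \sum_{j \neq i} \sum_{m} \ln\!\Big(1 + \exp\big[\beta_m\,(x_{jm} - x_{im})\big]\Big),
\qquad
P(i) \;=\; \frac{\exp(-R_i)}{\sum_{k} \exp(-R_k)} .
```

    Minimising regret rather than maximising a context-free utility is one way such relational, within-choice-set comparisons can be embedded in a discrete choice model.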