
    Validating Predictions of Unobserved Quantities

    The ultimate purpose of most computational models is to make predictions, commonly in support of some decision-making process (e.g., for design or operation of some system). The quantities that need to be predicted (the quantities of interest or QoIs) are generally not experimentally observable before the prediction, since otherwise no prediction would be needed. Assessing the validity of such extrapolative predictions, which is critical to informed decision-making, is challenging. In classical approaches to validation, model outputs for observed quantities are compared to observations to determine if they are consistent. By itself, this consistency only ensures that the model can predict the observed quantities under the conditions of the observations. This limitation dramatically reduces the utility of the validation effort for decision making because it implies nothing about predictions of unobserved QoIs or for scenarios outside the range of observations. However, there is no agreement in the scientific community today regarding best practices for validation of extrapolative predictions made using computational models. The purpose of this paper is to propose and explore a validation and predictive assessment process that supports extrapolative predictions for models with known sources of error. The process includes stochastic modeling, calibration, validation, and predictive assessment phases where representations of known sources of uncertainty and error are built, informed, and tested. The proposed methodology is applied to an illustrative extrapolation problem involving a misspecified nonlinear oscillator.
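The calibrate-then-extrapolate pipeline the abstract describes can be illustrated with a minimal sketch: fit model parameters to observations over a limited range, then propagate parameter uncertainty to an unobserved QoI outside that range. The model form, data, and bootstrap scheme here are all invented for illustration, not the paper's oscillator problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observations": noisy samples of y = a*x + b*x^2 over a limited range
a_true, b_true = 2.0, 0.5
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = a_true * x_obs + b_true * x_obs**2 + rng.normal(0, 0.05, x_obs.size)

# Calibration phase: least-squares fit of the two parameters
X = np.column_stack([x_obs, x_obs**2])
theta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
print("calibrated parameters:", theta)

# Predictive assessment phase: bootstrap the fit to quantify uncertainty
# on an extrapolated QoI (the response at x = 3, outside the data range)
qoi_samples = []
for _ in range(200):
    idx = rng.integers(0, x_obs.size, x_obs.size)
    t, *_ = np.linalg.lstsq(X[idx], y_obs[idx], rcond=None)
    qoi_samples.append(t[0] * 3.0 + t[1] * 9.0)

qoi = np.array(qoi_samples)
print(f"extrapolated QoI at x=3: {qoi.mean():.2f} +/- {qoi.std():.2f}")
```

Note how the uncertainty on the QoI grows with distance from the observed range, which is exactly why consistency on observed quantities alone says little about extrapolative validity.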

    Stereotype reputation with limited observability

    Assessing trust and reputation is essential in multi-agent systems where agents must decide whom to interact with. Assessment typically relies on the direct experience of a trustor with a trustee agent, or on information from witnesses. Where direct or witness information is unavailable, such as when agent turnover is high, stereotypes learned from common traits and behaviour can provide this information. Such traits may be only partially or subjectively observed, with witnesses not observing traits of some trustees or interpreting their observations differently. Existing stereotype-based techniques are unable to account for such partial observability and subjectivity. In this paper we propose a method for extracting information from witness observations that enables stereotypes to be applied in partially and subjectively observable dynamic environments. Specifically, we present a mechanism for learning translations between observations made by trustor and witness agents with subjective interpretations of traits. We show through simulations that such translation is necessary for reliable reputation assessments in dynamic environments with partial and subjective observability.
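One simple way to picture the translation mechanism described above: when a trustor and a witness have both labelled the same trustees, their paired labels can be used to estimate a mapping between the two subjective vocabularies. The data, labels, and row-normalised confusion-matrix estimator below are invented for illustration; the paper's own mechanism may differ.

```python
from collections import Counter, defaultdict

# (witness_label, trustor_label) pairs for trustees both agents observed
co_observations = [
    ("reliable", "good"), ("reliable", "good"), ("reliable", "ok"),
    ("flaky", "bad"), ("flaky", "bad"), ("flaky", "ok"),
]

counts = defaultdict(Counter)
for w_label, t_label in co_observations:
    counts[w_label][t_label] += 1

# translation[w][t] estimates P(trustor label t | witness label w)
translation = {
    w: {t: c / sum(ctr.values()) for t, c in ctr.items()}
    for w, ctr in counts.items()
}

# Apply the learned translation to a witness report about an unseen trustee
report = "reliable"
print(translation[report])  # distribution over the trustor's own labels
```

The translated report is a distribution over the trustor's own labels rather than a single value, which reflects the residual subjectivity the abstract highlights.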

    Institutions for Intuitive Man

    Critics routinely accuse the rational choice model of being unrealistic. One key objection has it that, for all nontrivial problems, calculating the best response is cognitively far too taxing, given the severe cognitive limitations of the human mind. If one confines the analysis to consciously controlled decision-making, this criticism is certainly warranted. But it ignores a second mental apparatus. Unlike conscious deliberation, this apparatus does not work serially but in parallel. It handles huge amounts of information in almost no time. It is simply not consciously accessible. Only the end result is propelled back to consciousness as an intuition. It is too early to decide whether the rational choice model is ultimately even descriptively correct. But at any rate institutional analysts and institutional designers are well advised to take this powerful mechanism seriously. In appropriate contexts, institutions should see to it that decision-makers trust their intuitions. This frequently creates a dilemma. For better performance is often not the only goal pursued by institutional intervention. Accountability, predictability and regulability are also desired. Sometimes, clever interventions are able to achieve both. Arguably, the obligation to write an explicit set of reasons for a court decision is a case in point. The judge is not obliged to report the mental processes by which she has taken her decision. Justification is only ex post control. Intuitive decision-making is even more desirable if the underlying social problem is excessively complex (NP-hard, to be specific), or ill-defined. Sometimes, it is enough for society to give room for intuitive decision-making. For instance, in simple social dilemmas, a combination of cheater detection and punishing sentiments does the trick. However, intuition can be misled. For instance, punishing sentiments are triggered by a hurt sense of fairness. In more complex social dilemmas, however, there are competing fairness norms, and people intuitively choose with a self-serving bias. In such contexts, institutions must step in so that clashing intuitions do not lead to social unrest.
    Keywords: intuition, consciousness, rational choice, heuristics, ill-defined social problems, institutions

    Adaptation of WASH Services Delivery to Climate Change and Other Sources of Risk and Uncertainty

    This report urges WASH sector practitioners to take more seriously the threat of climate change and the consequences it could have on their work. By considering climate change within a risk and uncertainty framework, the field can use the multitude of approaches laid out here to adequately protect itself against a range of direct and indirect impacts. Eleven methods and tools for this specific type of risk management are described, including practical advice on how to implement them successfully.

    Confidence limits: what is the problem? Is there the solution?

    This contribution to the debate on confidence limits focuses mostly on the case of measurements with `open likelihood', in the sense that it is defined in the text. I will show that, though a prior-free assessment of `confidence' is, in general, not possible, still a search result can be reported in a mostly unbiased and efficient way, which satisfies some desiderata which I believe are shared by the people interested in the subject. The simpler case of `closed likelihood' will also be treated, and I will discuss why a uniform prior on a sensible quantity is a very reasonable choice for most applications. In both cases, I think that much clarity will be achieved if we remove from scientific parlance the misleading expressions `confidence intervals' and `confidence levels'.
    Comment: 20 pages, 6 figures, using cernrepp.cls (included). Contribution to the Workshop on Confidence Limits, CERN, Geneva, 17-18 January 2000. This paper and related work are also available at http://www-zeus.roma1.infn.it/~agostini/prob+stat.htm
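The `closed likelihood' case with a uniform prior can be made concrete with a toy counting measurement: a Poisson observation with known background, a flat prior on the non-negative signal, and a credible interval read off the posterior. All numbers here are illustrative and not taken from the paper.

```python
import math

n_obs = 5         # observed counts
background = 1.5  # expected background counts

def likelihood(s):
    """Poisson likelihood of n_obs counts given signal s plus background."""
    mu = s + background
    return math.exp(-mu) * mu**n_obs / math.factorial(n_obs)

# Grid posterior with a uniform prior on s >= 0 (posterior proportional to L)
ds = 0.01
grid = [i * ds for i in range(2001)]  # s in [0, 20]
post = [likelihood(s) for s in grid]
norm = sum(post) * ds
post = [p / norm for p in post]

# 68% central credible interval from the cumulative distribution
cdf, cum = [], 0.0
for p in post:
    cum += p * ds
    cdf.append(cum)
lo = grid[next(i for i, c in enumerate(cdf) if c >= 0.16)]
hi = grid[next(i for i, c in enumerate(cdf) if c >= 0.84)]
print(f"signal: 68% credible interval [{lo:.2f}, {hi:.2f}]")
```

The interval reported here is a probabilistic statement about the signal given the data and the prior, which is exactly the kind of reporting the author argues should replace frequentist `confidence' language.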

    A Bayesian network model to explore practice change by smallholder rice farmers in Lao PDR

    A Bayesian Network model has been developed that synthesizes findings from concurrent multi-disciplinary research activities. The model describes the many factors that impact the chances of a smallholder farmer adopting a proposed change to farming practices. The model, when applied to four different proposed technologies, generated insights into the factors that have the greatest influence on adoption rates. Behavioural motivations for change are highly dependent on farmers' individual viewpoints and are also technology dependent. The model serves as a boundary object that provides an opportunity to engage experts and other stakeholders in discussions about their assessment of the technology adoption process, and the opportunities, barriers and constraints faced by smallholder farmers when considering whether to adopt a technology.
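A toy version of such an adoption network shows how the approach works: parent factors feed a conditional probability table for adoption, and queries are answered by enumeration. The structure, factor names, and probabilities below are invented for illustration and are not taken from the paper's model.

```python
# Prior probabilities of two hypothetical parent factors
P_credit = {True: 0.4, False: 0.6}   # farmer has access to credit
P_benefit = {True: 0.7, False: 0.3}  # farmer perceives a clear benefit

# Conditional probability table: P(adopt | credit, benefit)
P_adopt = {
    (True, True): 0.8,
    (True, False): 0.3,
    (False, True): 0.4,
    (False, False): 0.05,
}

def p_adopt():
    """Marginal probability of adoption, enumerating the parent factors."""
    return sum(
        P_credit[c] * P_benefit[b] * P_adopt[(c, b)]
        for c in (True, False)
        for b in (True, False)
    )

def p_credit_given_adopt():
    """Diagnostic query via Bayes' rule: P(credit | adopt)."""
    joint = sum(
        P_credit[True] * P_benefit[b] * P_adopt[(True, b)]
        for b in (True, False)
    )
    return joint / p_adopt()

print(f"P(adopt) = {p_adopt():.3f}")
print(f"P(credit | adopt) = {p_credit_given_adopt():.3f}")
```

The diagnostic query is what makes such a model useful as a boundary object: stakeholders can ask which factors most strongly separate adopters from non-adopters and debate the probabilities directly.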

    WikiSensing: A collaborative sensor management system with trust assessment for big data

    Big Data for sensor networks and collaborative systems has become ever more important in the digital economy and is a focal point of technological interest, while posing many noteworthy challenges. This research addresses some of the challenges in the areas of online collaboration and Big Data for sensor networks. This research demonstrates WikiSensing (www.wikisensing.org), a high-performance, heterogeneous, collaborative data cloud for the management and analysis of real-time sensor data. The system is based on a Big Data architecture with comprehensive functionalities for smart city sensor data integration and analysis. The system is fully functional and served as the main data management platform for the 2013 UPLondon Hackathon. This system is unique as it introduced a novel methodology that incorporates online collaboration with sensor data. While there are other platforms available for sensor data management, WikiSensing is one of the first that enables online collaboration by providing services to store and query dynamic sensor information without any restriction on the type and format of sensor data. An emerging challenge of collaborative sensor systems is modelling and assessing the trustworthiness of sensors and their measurements. This is directly relevant to WikiSensing as an open collaborative sensor data management system. If the trustworthiness of the sensor data can be accurately assessed, WikiSensing will be more than just a collaborative data management system for sensors; it will also be a platform that informs its users about the validity of its data. Hence this research presents a new generic framework for capturing and analysing sensor trustworthiness, considering the different forms of evidence available to the user. It uses an extensible set of metrics that can represent such evidence and uses Bayesian analysis to develop a trust classification model.
    Several publications are based on this work, with others at the final stage of submission. Further improvement is also planned to make the platform serve as a cloud service accessible to any online user, to build up a community of collaborators for smart city research.
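A minimal sketch of a Bayesian trust metric of the kind described: treat each sensor reading that agrees or disagrees with some reference as a Bernoulli trial and maintain a Beta posterior over the sensor's reliability, mapping the posterior mean to a coarse trust class. The thresholds and class names are illustrative assumptions, not WikiSensing's actual model.

```python
def beta_mean(alpha, beta):
    """Posterior mean reliability under a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

def update_trust(alpha, beta, agreements, disagreements):
    """Conjugate Beta-Bernoulli update from new evidence counts."""
    return alpha + agreements, beta + disagreements

def classify(alpha, beta):
    """Map posterior mean reliability to a coarse trust class."""
    m = beta_mean(alpha, beta)
    if m >= 0.8:
        return "trusted"
    if m >= 0.5:
        return "uncertain"
    return "untrusted"

# Start from a uniform Beta(1, 1) prior, then observe 18 agreements
# and 2 disagreements for one sensor
a, b = update_trust(1.0, 1.0, 18, 2)
print(beta_mean(a, b), classify(a, b))
```

The conjugate update keeps the metric cheap to maintain per sensor, and the evidence counts extend naturally to the other forms of evidence the framework's metrics are meant to represent.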