
    Arithmetic computation with probability words and numbers

    Probability information is regularly communicated to experts who must fuse multiple estimates to support decision-making. Such information is often communicated verbally (e.g., “likely”) rather than with precise numeric (point) values (e.g., “.75”), yet people are not taught to perform arithmetic on verbal probabilities. We hypothesized that the accuracy and logical coherence of averaging and multiplying probabilities will be poorer when individuals receive probability information in verbal rather than numerical point format. In four experiments (N = 213, 201, 26, and 343, respectively), we manipulated probability communication format between-subjects. Participants averaged and multiplied sets of four probabilities. Across experiments, arithmetic accuracy and coherence were significantly better with point than with verbal probabilities. These findings generalized between expert (intelligence analysts) and non-expert samples and when controlling for calculator use. Experiment 4 revealed an important qualification: whereas accuracy and coherence were better among participants presented with point probabilities than with verbal probabilities, imprecise numeric probability ranges (e.g., “.70 to .80”) afforded no computational advantage over verbal probabilities. Experiment 4 also revealed that the advantage of the point over the verbal format is partially mediated by strategy use. Participants presented with point estimates were more likely to use mental computation than guesswork, and mental computation was found to be associated with better accuracy. Our findings suggest that where computation is important, probability information should be communicated to end users with precise numeric probabilities.
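The two arithmetic tasks the abstract describes, averaging and multiplying a set of four point-format probabilities, can be sketched as follows. The function names and the example estimates are illustrative, not taken from the study's materials.

```python
def average_probability(probs):
    """Arithmetic mean of a list of point probabilities."""
    return sum(probs) / len(probs)

def joint_probability(probs):
    """Product of probabilities (joint probability of independent events)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Four point estimates, communicated numerically (e.g., ".75").
estimates = [0.75, 0.60, 0.80, 0.65]
print(round(average_probability(estimates), 3))  # 0.7
print(round(joint_probability(estimates), 3))    # 0.234
```

A coherent response keeps both results inside [0, 1], and the product can never exceed the smallest input probability, which is one of the logical-coherence constraints such tasks can be checked against.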

    Assessing the communication quality of CSR reports. A case study on four Spanish food companies

    This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license. This article belongs to the Section Economic, Business and Management Aspects of Sustainability. Sustainability reports are tools for disseminating information to stakeholders and the public, serving organizations in the dual purpose of communicating CSR and being accountable. The production of these reports has recently become more prevalent in the food industry, despite the fact that this practice has received heavy criticism on two fronts: the quality of the tool for communication, and the extent of accountability. In addition to these criticisms, organizations must overcome the additional challenge of publishing sustainability reports that successfully meet the demands of a multi-stakeholder audience. In light of the importance of this practice, this paper presents a method to assess the communication and accountability characteristics of Spanish food companies' sustainability reports. This method is based on the Analytic Network Process (ANP) and adopts a multi-stakeholder approach. This research, therefore, provides a reference model for improving sustainability reports, with the aim of successfully meeting their communication objectives and the demands of all stakeholders. This research has been conducted within the research activities of the Master in Corporate Social Responsibility at the Universitat Politècnica de València (http://www.master-rsc.upv.es/). We acknowledge support by the CSIC Open Access Publication Initiative through its Unit of Information Resources for Research (URICI). Peer Reviewed

    Using Experts' Beliefs to Inform Public Policy: Capturing and Using the Views of Many

    Cost-effectiveness decision modelling (CEDM) is widely used to inform healthcare resource allocation; however, there is often a paucity of data to quantify the level of uncertainty around model parameters. Expert elicitation has been proposed as a method for quantifying uncertainty when other sources of evidence are not available. Elicitation refers to formal processes for quantifying experts’ beliefs about uncertain quantities, typically as probability distributions. It is generally conducted with multiple experts to minimise bias and ensure representation of experts with different perspectives. In CEDM, priors are most commonly elicited from individual experts and then pooled mathematically into an aggregate prior that is subsequently used in the model. When pooling priors mathematically, the investigator must decide whether to weight all experts equally or to assume that some experts in the sample should be given ‘more say’. The choice of method for deriving weights for experts’ priors can affect the resulting estimates of uncertainty, yet it is not clear which method is optimal. This thesis develops an understanding of the methods for deriving weights in opinion pooling. A literature review is first conducted to identify the existing methods for deriving weights. Differences between the identified methods are then analysed and discussed in terms of how they affect the role of each method in elicitation. The developed principles are then applied in a case study, where experts’ priors on the effectiveness of a health intervention are elicited and used to characterise parametric uncertainty in a CEDM. The findings are used to analyse and compare different methods for weighting priors, and to observe the consequences of using different methods in the decision model. The findings improve the understanding of how different weighting methods capture experts’ ‘contributions’, and the choice of method for deriving weights is found to influence the decision generated by the model.
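The mathematical pooling described above can be illustrated with a linear opinion pool, a common aggregation rule, though the abstract does not specify which pooling function the thesis uses. The expert distributions and weights below are invented for illustration.

```python
def linear_pool(priors, weights):
    """Weighted linear opinion pool of discretised expert priors.

    priors  -- list of probability vectors, one per expert, over the same bins
    weights -- one non-negative weight per expert (normalised internally)
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    n_bins = len(priors[0])
    return [sum(w * p[i] for w, p in zip(norm, priors)) for i in range(n_bins)]

# Two experts' discretised priors on a treatment effect (three bins).
expert_a = [0.2, 0.5, 0.3]
expert_b = [0.4, 0.4, 0.2]

equal  = linear_pool([expert_a, expert_b], [1, 1])  # all experts equal
skewed = linear_pool([expert_a, expert_b], [3, 1])  # expert A given 'more say'
print([round(x, 3) for x in equal])   # [0.3, 0.45, 0.25]
print([round(x, 3) for x in skewed])  # [0.25, 0.475, 0.275]
```

Comparing the two pooled vectors shows the thesis's central point: the weighting scheme changes the aggregate prior, and hence the uncertainty that propagates into the decision model.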

    Improving judgments of existential risk: better forecasts, questions, explanations, policies

    Forecasting tournaments are misaligned with the goal of producing actionable forecasts of existential risk, an extreme-stakes domain with slow accuracy feedback and elusive proxies for long-run outcomes. We show how to improve alignment by measuring facets of human judgment that play central roles in policy debates but have long been dismissed as unmeasurable. The key is supplementing traditional objective accuracy metrics with intersubjective metrics that test forecasters’ skill at predicting other forecasters’ judgments on topics that resist objective scoring, such as long-range scenarios, the probativeness of questions, the insightfulness of explanations, and the impactfulness of risk-mitigation options. We focus on the value of Reciprocal Scoring, an intersubjective method grounded in micro-economic research that challenges top forecasters to predict each other’s judgments. Even if cumulative information gains prove modest and are confined to a 1-to-5-year planning horizon, the expected value of lives saved would be massive.
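The abstract does not give the Reciprocal Scoring formula, but the intersubjective idea can be sketched in simplified form: score each forecaster's prediction of the panel's judgment against a summary (here, the median) of the other forecasters' judgments. The squared-error rule and leave-one-out target are assumptions for illustration.

```python
import statistics

def intersubjective_scores(predictions, judgments):
    """Score each forecaster's prediction of the other forecasters' judgments.

    predictions -- each forecaster's prediction of what the others will say
    judgments   -- each forecaster's own judgment on the question
    Returns squared-error scores (lower is better), computed against the
    median of the other forecasters' judgments (leave-one-out).
    """
    scores = []
    for i, pred in enumerate(predictions):
        others = [j for k, j in enumerate(judgments) if k != i]
        target = statistics.median(others)
        scores.append((pred - target) ** 2)
    return scores

# Three forecasters on a question with no near-term objective resolution.
predictions = [0.30, 0.25, 0.40]
judgments = [0.35, 0.20, 0.30]
print(intersubjective_scores(predictions, judgments))
```

The point of such a rule is that it remains scoreable immediately, without waiting decades for an objective outcome to resolve.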

    Improving water asset management when data are sparse

    Ensuring the high performance of assets in water utilities is critically important and requires continuous improvement, given the need to minimise the risk of harm to human health and the environment from contaminated drinking water. Continuous improvement and innovation in water asset management are therefore necessary, driven by (i) increased regulatory requirements on serviceability; (ii) high maintenance costs; (iii) higher customer expectations; and (iv) enhanced environmental and health/safety requirements. High-quality data on asset failures, maintenance, and operations are key requirements for developing reliability models. However, a literature search revealed that, in practice, data are sometimes limited in water utilities - particularly for over-ground assets. Perhaps surprisingly, there is often a mismatch between the ambitions of sophisticated reliability tools and the availability of the asset data water utilities are able to draw upon to implement them in practice. This research provides models to support decision-making in water utility asset management when data are limited. Three approaches were developed: assessing asset condition, assessing maintenance effectiveness, and selecting maintenance regimes for specific asset groups. Expert elicitation was used to test and apply the developed decision-support tools. A major regional water utility in England was used as a case study to investigate and test the developed approaches. The new approach achieved improved precision in asset condition assessment (Figure 3–3a) - supporting the requirements of the UK Capital Maintenance Planning Common Framework. Critically, the thesis demonstrated that assets were sometimes misallocated by more than 50% between condition grades when using current approaches. Expert opinions were also sought for assessing maintenance effectiveness, and a new approach was tested with over-ground assets. The new approach’s value was demonstrated by its capability to account for finer measurements (as low as 10%) of maintenance effectiveness (Table 4-4). An asset maintenance regime selection approach was developed to support decision-making when data are sparse. The value of the approach lies in its versatility in selecting different regimes for different asset groups, specifically accounting for each asset group’s unique performance variables.
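The thesis's regime-selection method is not reproduced in the abstract; as a rough sketch of how elicited expert scores can drive per-asset-group regime selection, the weighted-sum decision matrix below picks the regime with the highest weighted score. The criteria, regimes, scores, and weights are all invented for illustration.

```python
def select_regime(scores, weights):
    """Pick the maintenance regime with the highest weighted score.

    scores  -- {regime_name: [score per criterion]}, scores in [0, 1]
    weights -- importance weight per criterion, same order as the scores
    Returns the best regime name and the full table of weighted totals.
    """
    totals = {regime: sum(s * w for s, w in zip(vals, weights))
              for regime, vals in scores.items()}
    return max(totals, key=totals.get), totals

# Illustrative expert scores against three criteria:
# cost, risk reduction, serviceability.
scores = {
    "run-to-failure":  [0.9, 0.2, 0.3],
    "time-based":      [0.5, 0.7, 0.7],
    "condition-based": [0.3, 0.9, 0.8],
}
weights = [0.2, 0.5, 0.3]  # criterion weights for one asset group
best, totals = select_regime(scores, weights)
print(best)  # condition-based
```

Re-running the same table with a different weight vector for another asset group can select a different regime, which is the versatility the abstract highlights.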

    A framework for the Comparative analysis of text summarization techniques

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. With the boom of information technology and the IoT (Internet of Things), the volume of data is increasing at an alarming rate. This data can be harnessed and, if channelled in the right direction, meaningful information can be found. The problem is that this data is not always numerical; there are cases where the data is completely textual, and some meaning has to be derived from it. If one had to go through these texts manually, it would take hours or even days to obtain concise and meaningful information from them. This is where the need for an automatic summarizer arises: it eases manual intervention and reduces time and cost while retaining the key information held by these texts. In recent years, new methods and approaches have been developed to do so. These approaches are implemented in many domains; for example, search engines provide snippets as document previews, while news websites produce shortened descriptions of news subjects, usually as headlines, to make browsing easier. Broadly speaking, there are two main approaches to text summarization: extractive and abstractive. Extractive summarization selects important sections of the original text to form a condensed version. Abstractive summarization interprets and examines the text as a whole and, after discerning its meaning, generates new sentences describing the important points in a concise way.
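The extractive approach described above can be sketched with a minimal word-frequency summarizer: sentences are scored by how frequent their words are in the whole document, and the top-scoring sentences are returned in their original order. This is a generic illustration of the extractive family, not the dissertation's own technique.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Return the n_sentences highest-scoring sentences, in original order.

    A sentence's score is the sum of document-wide frequencies of its words.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r"\w+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))

doc = ("Automatic summarization condenses text. "
       "The goal is a short text that keeps key information. "
       "Weather was nice today.")
print(extractive_summary(doc, n_sentences=2))
```

An abstractive system, by contrast, would generate wording not present in the input, which is why it typically requires a trained language-generation model rather than a scoring heuristic like this one.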