
    Understandings and Misunderstandings of Multidimensional Poverty Measurement

    Multidimensional measures provide an alternative lens through which poverty may be viewed and understood. In recent work we have attempted to offer a practical approach to identifying the poor and measuring aggregate poverty (Alkire and Foster 2011). As this is quite a departure from traditional unidimensional and multidimensional poverty measurement – particularly with respect to the identification step – further elaboration may be warranted. In this paper we elucidate the strengths, limitations, and misunderstandings of multidimensional poverty measurement in order to clarify the debate and catalyse further research. We begin with general definitions of unidimensional and multidimensional methodologies for measuring poverty. We provide an intuitive description of our measurement approach, including a ‘dual cutoff’ identification step that views poverty as the state of being multiply deprived, and an aggregation step based on the traditional Foster, Greer and Thorbecke (FGT) measures. We briefly discuss five characteristics of our methodology that are easily overlooked or mistaken and conclude with some brief remarks on the way forward.
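
    To make the ‘dual cutoff’ identification and FGT-style aggregation concrete, here is a minimal Python sketch of the adjusted headcount ratio M0 = H × A in the spirit of Alkire and Foster (2011); the toy data, equal weights, and cutoff value are illustrative assumptions added here, not figures from the paper.

```python
# Minimal sketch of the Alkire-Foster 'dual cutoff' counting approach.
# The dataset, equal dimension weights, and cutoff k are hypothetical.
import numpy as np

def adjusted_headcount(deprivations, weights, k):
    """Adjusted headcount ratio M0 = H * A.

    deprivations : (n_people, n_dims) 0/1 matrix, 1 = deprived in that dimension
    weights      : dimension weights summing to 1
    k            : poverty cutoff on the weighted deprivation score (0 < k <= 1)
    """
    scores = deprivations @ weights            # weighted deprivation score per person
    poor = scores >= k                         # identification: multiply deprived
    H = poor.mean()                            # headcount ratio
    A = scores[poor].mean() if poor.any() else 0.0  # average intensity among the poor
    return H * A                               # adjusted headcount ratio M0

# Illustrative data: 4 people, 3 equally weighted dimensions, cutoff k = 1/3
d = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 0],
              [1, 1, 1]])
w = np.array([1/3, 1/3, 1/3])
print(adjusted_headcount(d, w, k=1/3))         # -> 0.5 (H = 0.75, A = 2/3)
```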

    Multiple perspectives on the concept of conditional probability

    Conditional probability is key to the subjectivist theory of probability; however, it plays a subsidiary role in the usual conception of probability, where its counterpart, namely independence, is of basic importance. The paper investigates these concepts from various perspectives in order to shed light on their multi-faceted character. We will include the mathematical, philosophical, and educational perspectives. Furthermore, we will inspect conditional probability from the angles of competing ideas and solution strategies. For the comprehension of conditional probability, a wider approach is urgently needed to overcome the well-known problems in learning the concepts, which seem nearly unaffected by teaching.
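
    As a small numerical anchor for the contrast between conditional probability and independence (an illustration added here, not taken from the paper), the snippet below checks P(A|B) = P(A and B)/P(B) and the independence criterion P(A and B) = P(A)P(B) on a two-dice example.

```python
# Conditional probability vs. independence on the 36 equally likely outcomes of two dice.
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))   # all (d1, d2) pairs

def prob(event):
    hits = [o for o in outcomes if event(o)]
    return Fraction(len(hits), len(outcomes))

def conditional(event_a, event_b):
    joint = prob(lambda o: event_a(o) and event_b(o))
    return joint / prob(event_b)                  # P(A|B) = P(A and B) / P(B)

A = lambda o: o[0] == 6                 # first die shows 6
B = lambda o: o[0] + o[1] >= 10         # the sum is at least 10

print(conditional(A, B))                # 1/2, far from the unconditional P(A) = 1/6
print(prob(A) * prob(B) == prob(lambda o: A(o) and B(o)))  # False: A and B are dependent
```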

    Mathematics as the role model for neoclassical economics (Blanqui Lecture)

    Born out of the conscious effort to imitate mechanical physics, neoclassical economics ended up in the mid-20th century embracing a purely mathematical notion of rigor as embodied by the axiomatic method. This lecture tries to explain how this could happen, or why and when the economists’ role model became the mathematician rather than the physicist. According to the standard interpretation, the triumph of axiomatics in modern neoclassical economics can be explained in terms of the discipline’s increasing awareness of its lack of good experimental and observational data, and thus of its intrinsic inability to fully abide by the paradigm of mechanics. Yet this story fails to properly account for the transformation that the word “rigor” itself underwent, first and foremost in mathematics, as well as for the existence of a specific motivation behind the economists’ decision to pursue the axiomatic route. While the full argument is developed in Giocoli (2003), these pages offer a taste of a (partially) alternative story which begins with the so-called formalist revolution in mathematics, then crosses the economists’ almost innate urge to bring their discipline to the highest possible level of generality and conceptual integrity, and ends with the advent and consolidation of that very core set of methods, tools and ideas that constitute the contemporary image of economics. Keywords: axiomatic method, formalism, rationality, neoclassical economics.

    Evidence Propagation and Consensus Formation in Noisy Environments

    We study the effectiveness of consensus formation in multi-agent systems where there is both belief updating based on direct evidence and belief combination between agents. In particular, we consider the scenario in which a population of agents collaborate on the best-of-n problem, where the aim is to reach a consensus about which is the best (alternatively, true) state from amongst a set of states, each with a different quality value (or level of evidence). Agents' beliefs are represented within Dempster-Shafer theory by mass functions, and we investigate the macro-level properties of four well-known belief combination operators for this multi-agent consensus formation problem: Dempster's rule, Yager's rule, Dubois & Prade's operator and the averaging operator. The convergence properties of the operators are considered and simulation experiments are conducted for different evidence rates and noise levels. Results show that a combination of updating on direct evidence and belief combination between agents results in better consensus on the best state than does evidence updating alone. We also find that in this framework the operators are robust to noise. Broadly, Yager's rule is shown to be the best-performing operator across the parameter values considered, i.e., in terms of convergence to the best state, robustness to noise, and scalability. Comment: 13th International Conference on Scalable Uncertainty Management
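
    Of the four combination operators studied, Dempster's rule is the classical one; as a hedged illustration (not the authors' implementation), the sketch below combines two Dempster-Shafer mass functions over a two-state frame, redistributing the conflicting mass through renormalisation. The frame, focal sets, and mass values are invented for the example.

```python
# Dempster's rule of combination for two mass functions; focal sets are frozensets.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions, renormalising by the conflict mass K."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two agents' beliefs over the frame {s1, s2} in a best-of-n setting (illustrative values)
m1 = {frozenset({"s1"}): 0.6, frozenset({"s1", "s2"}): 0.4}
m2 = {frozenset({"s2"}): 0.5, frozenset({"s1", "s2"}): 0.5}
print(dempster_combine(m1, m2))   # mass on {s1}, {s2}, and {s1, s2} after renormalisation
```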

    Data-Informed Calibration and Aggregation of Expert Judgment in a Bayesian Framework

    Historically, decision-makers have used expert opinion to supplement a lack of data. Expert opinion, however, is applied with much caution, because judgment is subjective and contains estimation error with some degree of uncertainty. The purpose of this study is to quantify the uncertainty surrounding the unknown of interest, given an expert opinion, in order to reduce the error of the estimate. This task is carried out by data-informed calibration and aggregation of expert opinion in a Bayesian framework. Additionally, this study evaluates the impact of the number of experts on the accuracy of the aggregated estimate. The objective is to determine the correlation between the number of experts and the accuracy of the combined estimate in order to recommend an expert panel size.
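
    One common way to realise such Bayesian aggregation of expert estimates is precision-weighted pooling under a conjugate normal-normal model; the sketch below assumes that model and hypothetical expert inputs purely for illustration, and is not the study's actual procedure.

```python
# Minimal sketch, assuming a conjugate normal-normal model: each expert's estimate is
# treated as a noisy observation of the unknown with a known (calibrated) error variance.
import numpy as np

def aggregate_experts(estimates, error_vars, prior_mean, prior_var):
    """Posterior mean and variance for the unknown, given independent expert estimates."""
    precision = 1.0 / prior_var
    weighted = prior_mean / prior_var
    for x, v in zip(estimates, error_vars):
        precision += 1.0 / v                 # each expert contributes precision 1/v
        weighted += x / v
    post_var = 1.0 / precision
    return weighted * post_var, post_var

# Three hypothetical experts with different calibration (error variances)
mean, var = aggregate_experts(estimates=[10.2, 9.5, 11.0],
                              error_vars=[1.0, 4.0, 2.0],
                              prior_mean=9.0, prior_var=25.0)
print(mean, var)
# Adding further well-calibrated experts shrinks the posterior variance, which is one
# sense in which panel size drives the accuracy of the aggregated estimate.
```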