
    Approximate Judgement Aggregation

    In this paper we analyze judgement aggregation problems in which a group of agents independently votes on a set of complex propositions that have interdependency constraints between them (e.g., transitivity when describing preferences). We consider the issue of judgement aggregation from the perspective of approximation. That is, we generalize previous results by studying approximate judgement aggregation. We relax the two main constraints assumed in the current literature, Consistency and Independence, and consider mechanisms that only approximately satisfy these constraints, that is, satisfy them on all but a small fraction of the inputs. The main question we raise is whether relaxing these notions significantly alters the class of satisfying aggregation mechanisms. The recent works on preference aggregation by Kalai, Mossel, and Keller fit into this framework. The main result of this paper is that, as in the case of preference aggregation, for a subclass of a natural class of aggregation problems termed 'truth-functional agendas', the set of satisfying aggregation mechanisms does not extend non-trivially when the constraints are relaxed. Our proof techniques involve the Fourier transform on the Boolean cube and an analysis of voter influences in voting protocols. The question we raise for approximate aggregation can be stated in terms of property testing. For instance, as a corollary of our result we obtain a generalization of the classic property-testing result for linearity of Boolean functions.
    Keywords: judgement aggregation, truth-functional agendas, computational social choice, computational judgement aggregation, approximate aggregation, inconsistency index, dependency index
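
    The linearity-testing corollary refers to the classic Blum-Luby-Rubinfeld (BLR) test. The following is a minimal sketch of that property-testing setting; the helper names and trial count are ours, not the paper's:

```python
import random

def blr_linearity_test(f, n, trials=1000):
    """Classic BLR test: a Boolean function f: {0,1}^n -> {0,1} is
    linear (a GF(2) parity) iff f(x) XOR f(y) == f(x XOR y) for all
    x, y.  Sampling random pairs estimates how far f is from the
    nearest parity function."""
    failures = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        xy = [a ^ b for a, b in zip(x, y)]
        if f(x) ^ f(y) != f(xy):
            failures += 1
    return failures / trials  # rejection rate grows with distance from linearity

# A parity passes the test; majority fails a constant fraction of trials.
parity = lambda x: x[0] ^ x[2]
majority = lambda x: int(sum(x) > len(x) / 2)
print(blr_linearity_test(parity, 5))    # ~0.0
print(blr_linearity_test(majority, 5))  # noticeably > 0
```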

    The wisdom of collective grading and the effects of epistemic and semantic diversity

    A computer simulation is used to study collective judgements that an expert panel reaches on the basis of qualitative probability judgements contributed by individual members. The simulated panel displays a strong and robust crowd wisdom effect. The panel's performance is better when members contribute precise probability estimates instead of qualitative judgements, but not by much. Surprisingly, it doesn't always hurt for panel members to interpret the probability expressions differently. Indeed, coordinating their understandings can be much worse
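
    The setup can be sketched in a few lines. The toy simulation below, with assumed noise levels and bin cut-points rather than the paper's actual parameters, illustrates how qualitative reports are mapped to numbers and averaged:

```python
import random

BINS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # assumed cut-points for qualitative expressions

def qualitative(p):
    """Map a probability to the midpoint of its qualitative bin."""
    for lo, hi in zip(BINS, BINS[1:]):
        if p <= hi:
            return (lo + hi) / 2

def panel_error(n_members=15, n_items=200, precise=True):
    """Mean absolute error of the panel's averaged estimate."""
    err = 0.0
    for _ in range(n_items):
        truth = random.random()
        views = [min(1, max(0, random.gauss(truth, 0.15))) for _ in range(n_members)]
        if not precise:
            views = [qualitative(v) for v in views]
        err += abs(sum(views) / n_members - truth)
    return err / n_items

print(panel_error(precise=True))   # precise estimates: slightly lower error
print(panel_error(precise=False))  # qualitative judgements: close behind
```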

    Intangibles mismeasurements, synergy, and accounting numbers: a note.

    For the last two decades, authors (e.g. Ohlson, 1995; Lev, 2000, 2001) have regularly pointed out the limitations that traditional accounting frameworks impose on the informativeness of financial reporting. Consistent with this claim, it has then been argued that accounting finds one of its major limits in not allowing for the direct recognition of synergy occurring amongst the firm's intangible and tangible items (Casta, 1994; Casta & Lesage, 2001). Although the firm-synergy phenomenon has been widely documented in the recent accounting literature (see, for instance, Hand & Lev, 2004; Lev, 2001), research has hitherto failed to provide a clear approach for directly assessing and accounting for such a fundamental corporate factor. The objective of this paper is to raise and examine, though not address exhaustively, the specific issues induced by modelling the synergy occurring amongst the firm's assets, whilst pointing out the limits of traditional accounting valuation tools. Since financial accounting valuation methods are mostly based on the mathematical property of additivity, and consequently may obscure the perspective of the firm as an organized set of assets, we propose an alternative valuation approach based on non-additive measures drawn from the frameworks of Choquet (1953) and Sugeno (1997). More precisely, we show how this integration technique with respect to a non-additive measure can be used to cope with either positive or negative synergy in a firm's value-building process, and then discuss its potential implications for financial reporting.
    Keywords: financial reporting; accounting goodwill; assets synergy; non-additive measures; Choquet's framework
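
    For concreteness, here is a minimal sketch of the discrete Choquet integral that such non-additive valuation builds on; the two-asset capacity and values are invented for illustration and are not from the paper:

```python
def choquet(values, capacity):
    """Discrete Choquet integral of asset values w.r.t. a capacity
    (a non-additive set function).  Sort assets by decreasing value
    and weight each value decrement by the capacity of the coalition
    of assets at or above that level."""
    items = sorted(values, key=values.get, reverse=True)
    xs = [values[i] for i in items] + [0.0]
    total, coalition = 0.0, frozenset()
    for i, item in enumerate(items):
        coalition = coalition | {item}
        total += (xs[i] - xs[i + 1]) * capacity[coalition]
    return total

# Illustrative capacity: a brand and a patent whose joint weight exceeds
# the sum of their stand-alone weights (positive synergy).
cap = {
    frozenset({"brand"}): 0.4,
    frozenset({"patent"}): 0.4,
    frozenset({"brand", "patent"}): 1.0,  # superadditive: 1.0 > 0.4 + 0.4
}
vals = {"brand": 100.0, "patent": 60.0}
print(choquet(vals, cap))  # 76.0, above the additive valuation 0.4*100 + 0.4*60 = 64
```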

    Synergy Modelling and Financial Valuation: the contribution of Fuzzy Integrals.

    Financial valuation methods use aggregation operators that rely on additivity properties (summations, Lebesgue integrals). As a result, they overlook the reinforcement and synergy (or redundancy) phenomena that may exist between the elements of an organized set. This is particularly the case for the financial valuation of a firm's assets: in practice, a substantial gap is often observed between the "value of the sum of the elements" approach (favouring the financial point of view) and the "sum of the values of the individual elements" approach (favouring the accounting point of view). Aggregation operators such as fuzzy integrals (Sugeno, Grabisch, Choquet) make it possible, at the theoretical level, to model the synergy effect. More specifically, we show how integration with respect to a non-additive measure can be used to handle positive or negative synergy in value construction. The present study sets out to validate empirically the operational implementation of this model on a sample of listed companies that were valued in the course of a takeover bid.
    Keywords: fuzzy measure; fuzzy integral; aggregation operator; synergy; financial valuation
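
    Complementing the Choquet sketch above, here is a similarly minimal Sugeno integral, the ordinal counterpart that replaces sums and products with max and min; the capacity and values are again illustrative assumptions:

```python
def sugeno(values, capacity):
    """Discrete Sugeno integral.  Values and capacity are assumed to be
    scaled to a common [0, 1] range, as the max/min combination requires
    commensurate scales."""
    items = sorted(values, key=values.get, reverse=True)
    best, coalition = 0.0, frozenset()
    for item in items:
        coalition = coalition | {item}
        best = max(best, min(values[item], capacity[coalition]))
    return best

cap = {
    frozenset({"a"}): 0.3,
    frozenset({"b"}): 0.3,
    frozenset({"a", "b"}): 0.9,  # the pair carries more weight than each part
}
print(sugeno({"a": 0.8, "b": 0.6}, cap))  # 0.6: the synergy term dominates
```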

    Citizen participation and awareness raising in coastal protected areas. A case study from Italy

    In this chapter, part of the research carried out within the SECOA project (www.projectsecoa.eu) is presented. Attention is devoted to the methods and tools used to support the participatory process in a case of environmental conflict related to the definition of the boundaries of a coastal protected area: the Costa Teatina National Park, in Abruzzo, central Italy. The Costa Teatina National Park was established by the National Law 93/2001. Its territory includes eight southern Abruzzo municipalities and covers a stretch of coastline of approximately 60 km. It is a coastal protected area, which incorporates land but not sea, characterized by the presence of important cultural and natural assets. The Italian Ministry of Environment (1998) defines the area as "winding and varied, with the alternation of sandy and gravel beaches, cliffs, river mouths, areas rich in indigenous vegetation and cultivated lands (mainly olives), dunes and forest trees". The park boundaries were not defined by the law that established it, and their determination has been postponed to a later stage of territorial negotiation that has not yet ended (Montanari and Staniscia, 2013). The definition of the park boundaries, indeed, has given rise to an intense debate between citizens and interest groups who believe that environmental protection does not conflict with economic growth and those who believe the opposite. That is why the process is still ongoing and a solution is far from being reached. In this chapter, the methodology and the tools used to involve the general public in active participation in decision making and to support institutional players in conflict mitigation will be presented. Those tools have also proven to be effective in the dissemination of information and the transfer of knowledge. Results obtained through the use of each instrument will not be presented here, since this falls outside the purpose of the present essay. The chapter is organized as follows: in the first section, the importance of the theme of citizen participation in decision making will be highlighted; the focus will be on participation in the processes of ICZM, relevant to the management of coastal protected areas. In the second section, a review of the most commonly used methods in social research is presented; advantages and disadvantages of each of them will be highlighted. In particular, the history and evolution of the Delphi method and its derivatives are discussed; the focus will be on the dissemination value of the logic underlying such iterative methods. In the third section, the tools used in the case of the Costa Teatina National Park will be presented; strengths and weaknesses will be highlighted and proposals for their improvement will be advanced. Discussion and conclusions follow.
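
    The Delphi-style iteration discussed in the second section is easy to sketch in code. The toy below, with an assumed revision weight that is not part of the SECOA methodology, shows how repeated feedback of a group statistic pulls individual estimates together:

```python
import random
import statistics

def delphi_round(estimates, weight=0.5):
    """One Delphi-style iteration: each participant sees the group
    median and revises toward it.  The revision weight is an assumed
    illustrative parameter."""
    med = statistics.median(estimates)
    return [e + weight * (med - e) for e in estimates]

# Toy run: opinions on a contested quantity converge over rounds.
opinions = [random.uniform(0, 100) for _ in range(9)]
for _ in range(4):
    opinions = delphi_round(opinions)
print([round(o, 1) for o in opinions])  # spread shrinks round by round
```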

    Beyond epistemic democracy: the identification and pooling of information by groups of political agents.

    This thesis addresses the mechanisms by which groups of agents can track the truth, particularly in political situations. I argue that the mechanisms which allow groups of agents to track the truth operate in two stages: firstly, there are search procedures; and secondly, there are aggregation procedures. Search procedures and aggregation procedures work in concert. The search procedures allow agents to extract information from the environment. At the conclusion of a search procedure the information will be dispersed among different agents in the group. Aggregation procedures, such as majority rule, expert dictatorship and negative reliability unanimity rule, then pool these pieces of information into a social choice. The institutional features of both search procedures and aggregation procedures account for the ability of groups to track the truth and amount to social epistemic mechanisms. Large numbers of agents are crucial for the epistemic capacities of both search procedures and aggregation procedures. This thesis makes two main contributions to the literature on social epistemology and epistemic democracy. Firstly, most current accounts focus on the Condorcet Jury Theorem and its extensions as the relevant epistemic mechanism that can operate in groups of political agents. The introduction of search procedures to epistemic democracy is (mostly) new. Secondly, the thesis introduces a two-stage framework to the process of group truth-tracking. In addition to showing how the two procedures of search and aggregation can operate in concert, the framework highlights the complexity of social choice situations. Careful consideration of different types of social choice situation shows that different aggregation procedures will be optimal truth-trackers in different situations. Importantly, there will be some situations in which aggregation procedures other than majority rule will be best at tracking the truth.
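
    A minimal simulation in the spirit of the Condorcet Jury Theorem makes the contrast between aggregation procedures concrete; the competence probabilities below are assumed for illustration only:

```python
import random

def majority_accuracy(n, p, trials=20_000):
    """Probability that a majority of n independent voters, each correct
    with probability p on a binary question, picks the right answer."""
    hits = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n))
        hits += correct_votes > n / 2
    return hits / trials

print(majority_accuracy(1, 0.6))   # single modestly competent voter: ~0.60
print(majority_accuracy(51, 0.6))  # majority of 51 such voters: ~0.93
print(majority_accuracy(1, 0.9))   # "expert dictatorship": ~0.90
```

    Even a large jury of modestly competent voters can outperform a single expert here, yet with correlated votes or asymmetric error costs a different rule may track the truth better, which is the thesis's point about situation-dependent optimality.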

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification are remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
    Comment: Accepted to CVPR 2018; code and data available at https://www.github.com/richzhang/PerceptualSimilarity
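
    For reference, the package released at the linked repository (`pip install lpips`) can be used roughly as follows; this is a usage sketch in which the random tensors are placeholders for real images:

```python
import torch
import lpips  # the authors' released package; downloads pretrained weights on first use

# Learned perceptual metric built on VGG features, as studied in the paper.
loss_fn = lpips.LPIPS(net='vgg')

# Inputs: RGB tensors of shape (N, 3, H, W), scaled to [-1, 1].
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1
distance = loss_fn(img0, img1)  # higher = more perceptually different
print(distance.item())
```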

    Using simulation gaming to validate a mathematical modeling platform for resource allocation in disasters

    The extraordinary conditions of a disaster require the mobilisation of all available resources, inducing a rush of humanitarian partners into the affected area. This phenomenon, called the proliferation of actors, causes serious problems during the disaster response phase, including oversupply, duplicated efforts, and lack of planning. In an attempt to reduce the partner proliferation problem, a framework called PREDIS (PREdictive model for DISaster response partner selection) is put forward to configure the humanitarian network within the early hours after a disaster strikes, when information is scarce. To verify this model, a simulation game was designed using two sets of real decision makers (experts and non-experts) in the Haiyan disaster scenario. The results show that, using the PREDIS framework, 100% of the experts could make the same decisions in less than six hours, compared to 72 hours. Also, between 71% and 86% of the time, experts and non-experts made similar decisions using the PREDIS framework.

    The Wisdom of the Inner Crowd in Three Large Natural Experiments

    The quality of decisions depends on the accuracy of estimates of relevant quantities. According to the wisdom of crowds principle, accurate estimates can be obtained by combining the judgements of different individuals [1,2]. This principle has been successfully applied to improve, for example, economic forecasts [3-5], medical judgements [6-9] and meteorological predictions [10-13]. Unfortunately, there are many situations in which it is infeasible to collect the judgements of others. Recent research proposes that a similar principle applies to repeated judgements from the same person [14]. This paper tests this promising approach on a large scale in a real-world context. Using proprietary data comprising 1.2 million observations from three incentivized guessing competitions, we find that within-person aggregation indeed improves accuracy, and that the method works better when there is a time delay between subsequent judgements. However, the benefit pales against that of between-person aggregation: the average of a large number of judgements from the same person is barely better than the average of two judgements from different people.
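
    The headline contrast can be reproduced in a toy model in which each person's guesses share a personal bias, so within-person averaging removes only idiosyncratic noise while between-person averaging also averages out biases. All parameters below are assumptions for illustration, not the paper's data:

```python
import random

def simulate(truth=100.0, bias_sd=10.0, noise_sd=10.0, trials=50_000):
    """Compare averaging two guesses from one person ("inner crowd")
    against averaging one guess each from two people."""
    inner_err = outer_err = 0.0
    for _ in range(trials):
        b1, b2 = random.gauss(0, bias_sd), random.gauss(0, bias_sd)
        g1a = truth + b1 + random.gauss(0, noise_sd)  # person 1, guess 1
        g1b = truth + b1 + random.gauss(0, noise_sd)  # person 1, guess 2
        g2 = truth + b2 + random.gauss(0, noise_sd)   # person 2, guess 1
        inner_err += abs((g1a + g1b) / 2 - truth)
        outer_err += abs((g1a + g2) / 2 - truth)
    print(f"inner crowd: {inner_err/trials:.2f}, outer crowd: {outer_err/trials:.2f}")

simulate()  # between-person averaging yields the smaller error
```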