79,588 research outputs found

    More than one way to see it: Individual heuristics in avian visual computation

    Get PDF
    Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species’ ability to process pattern classes or on different species’ performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds’ choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments and the interpretation of comparative cognition research more generally.
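
    As a rough illustration of the kind of edge-based heuristic described for kea (not the authors' regular-expression model-selection code), the following Python sketch accepts a test string whenever both of its edge bigrams were observed at the edges of the training strings. The element alphabet, the training patterns and all function names are hypothetical.

    # Minimal sketch of an edge-bigram heuristic; all names and data are hypothetical.
    def learned_edge_bigrams(training_strings):
        """Collect the leading and trailing bigrams of each training pattern."""
        bigrams = set()
        for s in training_strings:
            bigrams.add(s[:2])   # leading edge
            bigrams.add(s[-2:])  # trailing edge
        return bigrams

    def accept_by_edges(stimulus, bigrams):
        """Accept a stimulus if both of its edge bigrams were seen during training."""
        return stimulus[:2] in bigrams and stimulus[-2:] in bigrams

    training = ["AABB", "AAABBB"]           # e.g. A^n B^n-style patterns over elements A and B
    edges = learned_edge_bigrams(training)  # {"AA", "BB"}
    print(accept_by_edges("AAAABBBB", edges))  # True: both edges match learned bigrams
    print(accept_by_edges("ABABAB", edges))    # False: edge bigram "AB" was never learned

    A heuristic of this kind can score well on many test items without representing the full generative rule, which is the point the abstract makes about the kea.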

    Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport

    Full text link
    Increasingly, discrimination by algorithms is perceived as a societal and legal problem. As a response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the Continuous Fairness Algorithm (CFAθ), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., of multi-dimensional discrimination of certain groups on grounds of several criteria. We discuss three main examples (credit applications; college admissions; insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
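
    A minimal sketch of the underlying idea, assuming one-dimensional scores (this is not the paper's CFA implementation; the data, names and interpolation scheme are illustrative): in one dimension the Wasserstein barycenter of the group score distributions can be obtained by averaging their quantile functions, and a parameter theta can interpolate between the original group scores and the barycenter-aligned scores.

    import numpy as np

    def fair_scores(scores_by_group, theta, grid=np.linspace(0, 1, 101)):
        # Quantile function of each group's score distribution on a common grid
        quantiles = {g: np.quantile(s, grid) for g, s in scores_by_group.items()}
        sizes = {g: len(s) for g, s in scores_by_group.items()}
        total = sum(sizes.values())
        # 1-D Wasserstein barycenter: population-weighted average of quantile functions
        bary = sum(sizes[g] / total * quantiles[g] for g in quantiles)
        adjusted = {}
        for g, s in scores_by_group.items():
            s = np.asarray(s, dtype=float)
            ranks = np.searchsorted(np.sort(s), s) / max(len(s) - 1, 1)  # within-group rank in [0, 1]
            on_bary = np.interp(ranks, grid, bary)                       # score mapped onto the barycenter
            adjusted[g] = (1 - theta) * s + theta * on_bary              # theta=0: unchanged, theta=1: barycenter-aligned
        return adjusted

    scores = {"A": np.random.normal(0.6, 0.1, 500), "B": np.random.normal(0.4, 0.1, 500)}
    print({g: round(v.mean(), 3) for g, v in fair_scores(scores, theta=1.0).items()})

    With theta=1 both groups are mapped onto the same score distribution (a group-fairness extreme), with theta=0 the original scores are left untouched, and intermediate values of theta trade the two views off against each other.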

    A review of domain adaptation without target labels

    Full text link
    Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting and representing features such that a source classifier performs well on the target domain, while inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research.
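
    As an illustration of the sample-based category only (a hedged sketch, not code from the review): a common importance-weighting approach trains a discriminator to tell source from target inputs and uses its predicted odds as per-observation weights when fitting the source classifier. Variable names and data are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target):
        """Estimate p_target(x) / p_source(x) with a domain discriminator."""
        X = np.vstack([X_source, X_target])
        domain = np.r_[np.zeros(len(X_source)), np.ones(len(X_target))]  # 0 = source, 1 = target
        disc = LogisticRegression(max_iter=1000).fit(X, domain)
        p = disc.predict_proba(X_source)[:, 1]
        # Predicted odds approximate the density ratio up to the class-size ratio, divided out here
        return (p / (1 - p)) * (len(X_source) / len(X_target))

    # The source classifier is then fit on labeled source data with these weights, e.g.:
    # clf = LogisticRegression().fit(X_source, y_source,
    #                                sample_weight=importance_weights(X_source, X_target))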

    Safety Net Design and Systemic Risk: New Empirical Evidence

    Get PDF
    Recent econometric evidence has noticeably changed views on the desirability and the appropriate design of explicit Deposit Insurance Schemes (DIS). The purpose of this paper is to take a second look at the data. After surveying recent empirical work and providing a theoretical framework, we argue that existing studies may suffer from a selection bias. Building on a new database on explicit deposit insurance compiled by the author, we perform a variety of semi-parametric and parametric tests to see whether and how explicit deposit insurance (de)stabilizes banking systems. We find that the evidence indeed suggests that a selection bias is present. Controlling for this bias leads to a reassessment of recent studies. In particular, making deposit insurance explicit has a rather moderate and, if any, stabilizing effect on the probability of experiencing a systemic crisis.

    Explanation and Elaboration Document for the STROBE-Vet Statement: Strengthening the Reporting of Observational Studies in Epidemiology—Veterinary Extension

    Get PDF
    The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement was first published in 2007 and again in 2014. The purpose of the original STROBE was to provide guidance for authors, reviewers and editors to improve the comprehensiveness of reporting; however, STROBE has a unique focus on observational studies. Although much of the guidance provided by the original STROBE document is directly applicable, it was deemed useful to map those statements to veterinary concepts, provide veterinary examples and highlight unique aspects of reporting in veterinary observational studies. Here, we present the examples and explanations for the checklist items included in the STROBE-Vet Statement. Thus, this is a companion document to the STROBE-Vet Statement methods and process document, which describes the checklist and how it was developed.

    Delayering and Firm Performance: Evidence from Swiss firm-level Data

    Get PDF
    The past decades witnessed a broad trend towards flatter organizations with fewer hierarchical layers. A reduction of the number of management levels in a corporation can have both positive and negative effects on firm performance, with the net effect being theoretically unclear ex ante. The present study uses a nationally representative data set of firms in Switzerland and empirically examines the direct performance effects of delayering. Applying ordinary least squares regressions and propensity score matching, this study finds that delayering significantly increases subsequent firm performance. It can be concluded that flatter hierarchical structures seem to enable firms to better realize their competitive advantage in today’s fast moving and knowledge-intensive market environment.
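
    A hedged sketch of the two estimation steps the abstract names (an OLS regression of performance on a delayering indicator, and one-to-one nearest-neighbour matching on an estimated propensity score). This is not the study's code; the pandas data frame and its column names (delayered, performance, plus a list of controls) are hypothetical.

    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def ols_effect(df, controls):
        """OLS coefficient on the delayering dummy, controlling for firm characteristics."""
        X = sm.add_constant(df[["delayered"] + controls])
        return sm.OLS(df["performance"], X).fit().params["delayered"]

    def psm_effect(df, controls):
        """Effect on the treated via 1-nearest-neighbour propensity score matching."""
        ps = LogisticRegression(max_iter=1000).fit(df[controls], df["delayered"]) \
                 .predict_proba(df[controls])[:, 1]        # estimated probability of delayering
        treated = df["delayered"].values == 1
        perf_t = df.loc[treated, "performance"].values
        perf_c = df.loc[~treated, "performance"].values
        nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
        _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))  # match each delayered firm to its closest control
        return perf_t.mean() - perf_c[idx.ravel()].mean()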