214 research outputs found

    The cost of coordination can exceed the benefit of collaboration in performing complex tasks

    Humans and other intelligent agents often rely on collective decision making based on an intuition that groups outperform individuals. However, at present, we lack a complete theoretical understanding of when groups perform better. Here, we examine performance in collective decision making in the context of a real-world citizen science task environment in which individuals with manipulated differences in task-relevant training collaborated. We find (1) dyads gradually improve in performance but do not experience a collective benefit compared to individuals in most situations; (2) the cost of coordination to efficiency and speed that results when switching to a dyadic context after training individually is consistently larger than the leverage of having a partner, even if they are expertly trained in that task; and (3) on the most complex tasks, having an additional expert in the dyad who is adequately trained improves accuracy. These findings highlight that the extent of training received by an individual, the complexity of the task at hand, and the desired performance indicator are all critical factors that need to be accounted for when weighing up the benefits of collective decision making.

    Artificial intelligence in government: Concepts, standards, and a unified framework

    Recent advances in artificial intelligence (AI), especially in generative language modelling, hold the promise of transforming government. Given the advanced capabilities of new AI systems, it is critical that these are embedded using standard operational procedures and clear epistemic criteria, and that they behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI applications may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full depth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by first conducting an integrative literature review to identify and cluster 69 key terms that frequently co-occur in the multidisciplinary study of AI. We then build on the results of this bibliometric analysis to propose three new multifaceted concepts for understanding and analysing AI-based systems for government (AI-GOV) in a more unified way: (1) operational fitness, (2) epistemic alignment, and (3) normative divergence. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to rethink government with AI.
    Comment: 35 pages with references and appendix, 3 tables, 2 figures
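    The abstract does not detail how the 69 key terms were clustered; as a minimal illustrative sketch (the toy documents, terms, and threshold below are all invented), term clustering from co-occurrence counts could look like this: count how often term pairs appear in the same document, then group terms via connected components over the thresholded co-occurrence graph.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count how often each term pair appears in the same document."""
    counts = Counter()
    for terms in docs:
        for a, b in combinations(sorted(set(terms)), 2):
            counts[(a, b)] += 1
    return counts

def cluster(terms, counts, threshold):
    """Group terms whose pairwise co-occurrence meets the threshold
    (connected components, via union-find, over the thresholded graph)."""
    parent = {t: t for t in terms}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    for (a, b), n in counts.items():
        if n >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for t in terms:
        groups.setdefault(find(t), set()).add(t)
    return list(groups.values())

# toy corpus: each "document" is the set of key terms it mentions
docs = [
    {"AI", "governance", "public administration"},
    {"AI", "governance", "accountability"},
    {"machine learning", "robotics"},
    {"machine learning", "robotics", "AI"},
]
all_terms = sorted({t for d in docs for t in d})
clusters = cluster(all_terms, cooccurrence(docs), threshold=2)
```

    With a threshold of 2, only term pairs that co-occur in at least two documents are linked, so "AI"/"governance" and "machine learning"/"robotics" form clusters while rarer terms stay singletons.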

    A multidomain relational framework to guide institutional AI research and adoption

    Calls for new metrics, technical standards and governance mechanisms to guide the adoption of Artificial Intelligence (AI) in institutions and public administration are now commonplace. Yet, most research and policy efforts aimed at understanding the implications of adopting AI tend to prioritize only a handful of ideas; they do not fully connect all the different perspectives and topics that are potentially relevant. In this position paper, we contend that this omission stems, in part, from what we call the ‘relational problem’ in socio-technical discourse: fundamental ontological issues have not yet been settled—including semantic ambiguity, a lack of clear relations between concepts and differing standard terminologies. This contributes to the persistence of disparate modes of reasoning to assess institutional AI systems, and the prevalence of conceptual isolation in the fields that study them, including ML, human factors, social science and policy. After developing this critique, we offer a way forward by proposing a simple policy and research design tool in the form of a conceptual framework to organize terms across fields—consisting of three horizontal domains for grouping relevant concepts and related methods: Operational, Epistemic, and Normative. We first situate this framework against the backdrop of recent socio-technical discourse at two premier academic venues, AIES and FAccT, before illustrating how developing suitable metrics, standards, and mechanisms can be aided by operationalizing relevant concepts in each of these domains. Finally, we outline outstanding questions for developing this relational approach to institutional AI research and adoption.

    Unsupervised feature extraction of aerial images for clustering and understanding hazardous road segments

    Aerial image data are becoming more widely available, and analysis techniques based on supervised learning are advancing their use in a wide variety of remote sensing contexts. However, supervised learning requires training datasets, which are not always available or easy to construct with aerial imagery. In this respect, unsupervised machine learning techniques present important advantages. This work presents a novel pipeline to demonstrate how available aerial imagery can be used to better the provision of services related to the built environment, using the case study of road traffic collisions (RTCs) across three cities in the UK. In this paper, we show how aerial imagery can be leveraged to extract latent features of the built environment from the purely visual representation of top-down images. With these latent image features in hand to represent the urban structure, this work then demonstrates how hazardous road segments can be clustered to provide a data-augmented aid for road safety experts to enhance their nuanced understanding of how and where different types of RTCs occur.
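    The abstract does not specify the clustering algorithm used on the latent features; assuming each road segment has already been mapped to a latent feature vector, a minimal k-means sketch (the data, dimensionality, and deterministic initialisation below are illustrative, not the paper's pipeline) shows the general idea of grouping segments by visual similarity.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means over latent feature vectors.
    Deterministic init: evenly spaced samples as starting centers."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # distance of every vector to every center, then nearest-center labels
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        new = np.vstack([X[labels == j].mean(axis=0) if (labels == j).any()
                         else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# toy "latent features": two well-separated groups of 8-D vectors,
# standing in for learned embeddings of road-segment imagery
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
               rng.normal(3.0, 0.1, (20, 8))])
labels, centers = kmeans(X, k=2)
```

    In a real pipeline the vectors would come from an unsupervised feature extractor (e.g. an autoencoder) rather than a random toy distribution, but the clustering step is structurally the same.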

    Mitigation of Cognitive Bias with a Serious Game: Two Experiments Testing Feedback Timing and Source

    One of the benefits of using digital games for education is that games can provide feedback for learners to assess their situation and correct their mistakes. We conducted two studies to examine the effectiveness of different feedback designs (timing, duration, repeats, and feedback source) in a serious game designed to teach learners about cognitive biases. We also compared the digital game-based learning condition to a professional training video. Overall, the digital game was significantly more effective than the video condition. Longer durations and repeats improved the bias-mitigation effects. Surprisingly, there was no significant difference between just-in-time and delayed feedback, and computer-generated feedback was more effective than feedback from other players.

    Imaging mass cytometry analysis of Becker muscular dystrophy muscle samples reveals different stages of muscle degeneration

    © 2024. The Author(s). Becker muscular dystrophy (BMD) is characterised by fibre loss and expansion of fibrotic and adipose tissue. Several cell types interact locally in what is known as the degenerative niche. We analysed muscle biopsies from controls and from BMD patients at early, moderate and advanced stages of progression using Hyperion imaging mass cytometry (IMC), labelling single sections with 17 markers that identify different components of the muscle. We developed software for analysing IMC images and studied changes in muscle composition and spatial correlations between markers across disease progression. We found a strong correlation between collagen-I and the area of stroma, collagen-VI, adipose tissue, and the number of M2 macrophages. There was a negative correlation between the area of collagen-I and the number of satellite cells (SCs), fibres and blood vessels. Comparing fibrotic and non-fibrotic areas allowed us to study the disease process in detail. We found structural differences between non-fibrotic areas from controls and patients, the latter characterised by an increase in CTGF and M2 macrophages and a decrease in fibres and blood vessels. IMC enables the study of changes in tissue structure along disease progression and of spatio-temporal correlations, opening the door to a better understanding of new potential pathogenic pathways in human samples.
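    The abstract does not describe how the authors' software quantifies spatial correlation between markers; one simple, generic approach (sketched here with invented toy channels, not the paper's method) is to pool each marker channel into tile means and take the Pearson correlation of the pooled intensities.

```python
import numpy as np

def marker_correlation(chan_a, chan_b, tile=4):
    """Pearson correlation between two marker channels, computed over
    mean intensities of non-overlapping square tiles — a coarse proxy
    for how strongly two markers co-localize in space."""
    h, w = chan_a.shape
    h, w = h - h % tile, w - w % tile  # crop to a whole number of tiles

    def pooled(img):
        blocks = img[:h, :w].reshape(h // tile, tile, w // tile, tile)
        return blocks.mean(axis=(1, 3)).ravel()

    return float(np.corrcoef(pooled(chan_a), pooled(chan_b))[0, 1])

# toy channels: "collagen-I" as an intensity gradient, one marker
# tracking it and one avoiding it
collagen = np.arange(64, dtype=float).reshape(8, 8)
follows = 2.0 * collagen + 1.0   # perfectly co-localized marker
avoids = 100.0 - collagen        # perfectly anti-localized marker
```

    A positive correlation would correspond to findings like the collagen-I/stroma association in the abstract, and a negative one to the collagen-I/satellite-cell relationship.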

    GYOTO: a new general relativistic ray-tracing code

    GYOTO, a general relativistic ray-tracing code, is presented. It aims at computing images of astronomical bodies in the vicinity of compact objects, as well as trajectories of massive bodies in relativistic environments. This code is capable of integrating the null and timelike geodesic equations not only in the Kerr metric, but also in any metric computed numerically within the 3+1 formalism of general relativity. Simulated images and spectra have been computed for a variety of astronomical targets, such as a moving star or a toroidal accretion structure. The underlying code is open source and freely available. It is user-friendly, quickly mastered, and very modular, so that extensions are easy to integrate. Custom analytical metrics and astronomical targets can be implemented in C++ plug-in extensions independent from the main code.
    Comment: 20 pages, 11 figures
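    GYOTO itself is a C++ code and its geodesic integrators are not reproduced here; as an illustration of the underlying idea — tracing a trajectory by numerically stepping an equation of motion — here is a generic fourth-order Runge-Kutta sketch applied to a Newtonian point-mass stand-in (the state layout, step size, and units are illustrative).

```python
import math

def rk4_step(f, y, h):
    """One classical 4th-order Runge-Kutta step for the system y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def point_mass(y):
    """Toy stand-in for a geodesic equation: Newtonian point mass, GM = 1.
    State is (x, z, vx, vz) in the orbital plane."""
    x, z, vx, vz = y
    r3 = (x * x + z * z) ** 1.5
    return [vx, vz, -x / r3, -z / r3]

# circular orbit at r = 1 with speed 1: the radius should stay constant
state = [1.0, 0.0, 0.0, 1.0]
for _ in range(6284):  # roughly one period, 2*pi / 0.001 steps
    state = rk4_step(point_mass, state, 0.001)
radius = math.hypot(state[0], state[1])
```

    In the relativistic case the right-hand side would instead come from the geodesic equation in the chosen metric (Kerr, or a numerically supplied 3+1 metric), but the stepping structure is the same.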