3,859 research outputs found

    Automated Update Tools To Augment the Wisdom of Crowds in Geopolitical Forecasting

    Despite the importance of predictive judgments, individual human forecasts are frequently less accurate than those of even simple prediction algorithms. At the same time, not all forecasts are amenable to algorithmic prediction. Here, we describe the evaluation of an automated prediction tool that enabled participants to create simple rules that monitored relevant indicators (e.g., commodity prices) to automatically update forecasts. We examined these rules in both a pool of previous participants in a geopolitical forecasting tournament (Study 1) and a naïve sample recruited from Mechanical Turk (Study 2). Across the two studies, we found that automated updates tended to improve forecast accuracy relative to initial forecasts and were comparable to manual updates. Additionally, making rules improved the accuracy of manual updates. Crowd forecasts likewise benefitted from rule-based updates. However, when presented with the choice of whether to accept, reject or adjust an automatic forecast update, participants showed little ability to discriminate between automated updates that were harmful versus beneficial to forecast accuracy. Simple prospective rule-based tools are thus able to improve forecast accuracy by offering accurate and efficient updates, but ensuring that forecasters make use of such tools remains a challenge.
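The indicator-monitoring rules described above can be pictured as simple threshold triggers. The sketch below is an illustrative reconstruction, not the study's actual tool; the indicator name, threshold, and adjustment size are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class UpdateRule:
    """If a monitored indicator crosses a threshold, nudge the forecast."""
    indicator: str   # name of a monitored series (hypothetical)
    threshold: float
    delta: float     # probability adjustment applied when the rule fires

def apply_rules(forecast: float, indicators: dict, rules: list) -> float:
    """Fire every rule whose condition holds and clamp the result to [0, 1]."""
    for rule in rules:
        if indicators.get(rule.indicator, float("-inf")) > rule.threshold:
            forecast += rule.delta
    return min(1.0, max(0.0, forecast))

# Hypothetical rule: raise P(event) by 0.15 if oil passes $100/barrel.
rules = [UpdateRule(indicator="oil_price", threshold=100.0, delta=0.15)]
updated = apply_rules(0.40, {"oil_price": 104.2}, rules)
```

Because such a rule re-evaluates whenever fresh indicator data arrive, it can revise the forecast more promptly than a manual revisit, which is the efficiency the abstract reports.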

    Recalibrating machine learning for social biases: demonstrating a new methodology through a case study classifying gender biases in archival documentation

    This thesis proposes a recalibration of Machine Learning for social biases to minimize harms from existing approaches and practices in the field. Prioritizing quality over quantity, accuracy over efficiency, representativeness over convenience, and situated thinking over universal thinking, the thesis demonstrates an alternative approach to creating Machine Learning models. Drawing on GLAM, the Humanities, the Social Sciences, and Design, the thesis focuses on understanding and communicating biases in a specific use case. 11,888 metadata descriptions from the University of Edinburgh Heritage Collections' Archives catalog were manually annotated for gender biases and text classification models were then trained on the resulting dataset of 55,260 annotations. Evaluations of the models' performance demonstrate that annotating gender biases can be automated; however, the subjectivity of bias as a concept complicates the generalizability of any one approach. The contributions are: (1) an interdisciplinary and participatory Bias-Aware Methodology, (2) a Taxonomy of Gendered and Gender Biased Language, (3) data annotated for gender biased language, (4) gender biased text classification models, and (5) a human-centered approach to model evaluation. The contributions have implications for Machine Learning, demonstrating how bias is inherent to all data and models; more specifically for Natural Language Processing, providing an annotation taxonomy, annotated datasets and classification models for analyzing gender biased language at scale; for the Gallery, Library, Archives, and Museum sector, offering guidance to institutions seeking to reconcile with histories of marginalizing communities through their documentation practices; and for historians, who utilize cultural heritage documentation to study and interpret the past.
Through a real-world application of the Bias-Aware Methodology in a case study, the thesis illustrates the need to shift away from removing social biases and towards acknowledging them, creating data and models that surface the uncertainty and multiplicity characteristic of human societies.

    Generation of probabilistic synthetic data for serious games: A case study on cyberbullying

    This is the final version, available on open access from Elsevier via the DOI in this record. Data availability: data will be made available on request. Synthetic data generation has been a growing area of research in recent years. However, its potential applications in serious games have yet to be thoroughly explored. Advances in this field could anticipate data modeling and analysis, as well as speed up the development process. To fill this gap in the literature, we propose a simulator architecture for generating probabilistic synthetic data for decision-based serious games. This architecture is designed to be versatile and modular so that it can be used by other researchers on similar problems (e.g., multiple-choice exams, political surveys, any type of questionnaire). To simulate the interaction of synthetic players with the game, we use a cognitive testing model based on the Item Response Theory framework. We also show how probabilistic graphical models (in particular, Bayesian networks) can introduce expert knowledge and external data into the simulation. Finally, we apply the proposed architecture and methods in the case of a serious game focused on cyberbullying. We perform Bayesian inference experiments using a hierarchical model to demonstrate the identifiability and robustness of the generated data. Funding: European Union Horizon 2020.
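The Item Response Theory simulation of synthetic players can be sketched minimally as follows. This is a generic two-parameter logistic (2PL) model, not the paper's exact simulator, and all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(abilities, difficulties, discriminations):
    """Draw binary responses under a 2PL IRT model:
    P(player i responds 'positively' to item j) = sigmoid(a_j * (theta_i - b_j))."""
    theta = np.asarray(abilities, dtype=float)[:, None]   # one row per player
    b = np.asarray(difficulties, dtype=float)[None, :]    # one column per item
    a = np.asarray(discriminations, dtype=float)[None, :]
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))            # response probabilities
    return rng.binomial(1, p)                             # 0/1 response matrix

# Hypothetical: 5 synthetic players facing 3 decision points in the game.
responses = simulate_responses(rng.normal(size=5), [-1.0, 0.0, 1.0], [1.0, 1.5, 0.8])
```

A Bayesian network over the ability parameters could then inject expert knowledge, for example correlating player traits, before responses are drawn, in the spirit of the architecture the abstract describes.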

    Disciplinary Rhetoric: And the Language of Online Rape Culture

    Master's thesis in digital culture (DIKULT350, MAHF-DIKU)

    Effective player guidance in logic puzzles

    Pen & paper puzzle games are an extremely popular pastime, often enjoyed by demographics normally not considered to be ‘gamers’. They are increasingly used as ‘serious games’ and there has been extensive research into computationally generating and efficiently solving them. However, there have been few academic studies that have focused on the players themselves. Presenting an appropriate level of challenge to a player is essential for both player enjoyment and engagement. Providing appropriate assistance is an essential mechanic for making a game accessible to a variety of players. In this thesis, we investigate how players solve Progressive Pen & Paper Puzzle Games (PPPPs) and how to provide meaningful assistance that allows players to recover from being stuck, while not reducing the challenge to trivial levels. This thesis begins with a qualitative in-person study of Sudoku solving. This study demonstrates that, in contrast to all existing assumptions used to model players, players were unsystematic, idiosyncratic and error-prone. We then designed an entirely new approach to providing assistance in PPPPs, which guides players towards easier deductions rather than, as current systems do, completing the next cell for them. We implemented a novel hint system using our design, with the level of challenge assessed using Minimal Unsatisfiable Sets (MUSs). We conducted four studies, using two different PPPPs, that evaluated the efficacy of the novel hint system compared to the current hint approach. The studies demonstrated that our novel hint system was as helpful as the existing system while also improving the player experience and feeling less like cheating. Players also chose to use our novel hint system significantly more often. We have provided a new approach to providing assistance to PPPP players and demonstrated that players prefer it over existing approaches.
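A deletion-based computation of a Minimal Unsatisfiable Set, the structure the hint system relies on, can be sketched generically. The toy constraint system below (integer intervals) is invented for illustration; the thesis applies the idea to puzzle deductions, not intervals.

```python
def find_mus(constraints, is_sat):
    """Deletion-based MUS extraction: try dropping each constraint in turn;
    keep it only if removing it makes the rest satisfiable (i.e. it is
    essential to the conflict)."""
    mus = list(constraints)
    i = 0
    while i < len(mus):
        trial = mus[:i] + mus[i + 1:]
        if is_sat(trial):
            i += 1        # essential to the conflict: keep and move on
        else:
            mus = trial   # redundant for the conflict: drop it
    return mus

def is_sat(intervals):
    """Toy checker: is there an integer in 0..9 inside every interval?"""
    return any(all(lo <= x <= hi for lo, hi in intervals) for x in range(10))

core = find_mus([(0, 5), (3, 9), (7, 9), (2, 4)], is_sat)  # -> [(7, 9), (2, 4)]
```

The size of such a core gives a principled measure of how hard the next deduction is, which is what would let a hint system steer players towards easier deductions rather than filling in cells for them.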

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Creativity: Avalanche in the Sand-pile

    [A revised and updated version of an earlier article 'Understanding Creativity: Affect Decision and Inference' (unpublished), posted at PhilArchive in 2021.] This book looks at the creative process in the human mind. Creativity involves a major restructuring of the conceptual space where a sustained inferential process eventually links remote conceptual domains, thereby opening up the possibility of a large number of new correlations between remote concepts by a cascading process. Since the process of inductive inference depends crucially on decisions at critical junctures of the inferential chain, it becomes necessary to examine the basic mechanism underlying the making of decisions. In the framework that we attempt to build up for the understanding of scientific creativity, this role of decision making in the inferential process assumes central relevance. The inferential exploration of the conceptual space, which generates the possibility of correlations being established between remote conceptual domains, is guided and steered at every stage by the affect system. While the affect system plays a guiding role in the exploration of the conceptual space, the process of exploration itself consists of the establishment of correlations between concepts by means of beliefs and heuristics, the self-linked ones among the latter having a special role in making possible the inferential journey along alternative routes whenever the shared rules of inference become inadequate. Representing the conceptual space in the form of a complex network, the overall process can be likened to one of self-organised criticality commonly observed in the dynamical evolution of complex systems. An instance of self-organised criticality is found in the avalanche set up in a slowly growing sand-pile.

    Conversations on Empathy

    In the aftermath of a global pandemic, amidst new and ongoing wars, genocide, inequality, and staggering ecological collapse, some in the public and political arena have argued that we are in desperate need of greater empathy — be this with our neighbours, refugees, war victims, the vulnerable or disappearing animal and plant species. This interdisciplinary volume asks the crucial questions: How does a better understanding of empathy contribute, if at all, to our understanding of others? How is it implicated in the ways we perceive, understand and constitute others as subjects? Conversations on Empathy examines how empathy might be enacted and experienced either as a way to highlight forms of otherness or, instead, to overcome what might otherwise appear to be irreducible differences. It explores the ways in which empathy enables us to understand, imagine and create sameness and otherness in our everyday intersubjective encounters, focusing on a varied range of "radical others" – others who are perceived as being dramatically different from oneself. With a focus on the importance of empathy to understand difference, the book contends that the role of empathy is critical, now more than ever, for thinking about local and global challenges of interconnectedness, care and justice.

    Development of a SQUID magnetometry system for cryogenic neutron electric dipole moment experiment

    A measurement of the neutron electric dipole moment (nEDM) could hold the key to understanding why the visible universe is the way it is: why matter should predominate over antimatter. As a charge-parity violating (CPV) quantity, an nEDM could provide an insight into new mechanisms that address this baryon asymmetry. The motivation for an improved sensitivity to an nEDM is to find it to be non-zero at a level consistent with certain beyond the Standard Model theories that predict new sources of CPV, or to establish a new limit that constrains them. CryoEDM is an experiment that sought to better the current limit of |d_n| < 2.9 × 10⁻²⁶ e·cm by an order of magnitude. It is designed to measure the nEDM via the Ramsey Method of Separated Oscillatory Fields, in which it is critical that the magnetic field remains stable throughout. A way of accurately tracking the magnetic fields, moreover at a temperature of ~0.5 K, is crucial for CryoEDM, and for future cryogenic projects. This thesis presents work focussing on the development of a 12-SQUID magnetometry system for CryoEDM that enables the magnetic field to be monitored to a precision of 0.1 pT. A major component of its infrastructure is the superconducting capillary shields, which screen the input lines of the SQUIDs from the pick up of spurious magnetic fields that will perturb a SQUID's measurement. These are shown to have a transverse shielding factor of > 1 × 10⁷, which is a few orders of magnitude greater than the calculated requirement. Efforts to characterise the shielding of the SQUID chips themselves are also discussed. The use of Cryoperm for shields reveals a tension between improved SQUID noise and worse neutron statistics. Investigations show that without it, SQUIDs have an elevated noise when cooled in a substantial magnetic field; with it, magnetostatic simulations suggest that it is detrimental to the polarisation of neutrons in transport.
The findings suggest that with proper consideration, it is possible to reach a compromise between the two behaviours. Computational work to develop a simulation of SQUID data is detailed, which is based on the Laplace equation for the magnetic scalar potential. These data are ultimately used in the development of a linear regression technique to determine the volume-averaged magnetic field in the neutron cells. This proves highly effective in determining the fields to within the 0.1 pT requirement under certain conditions.
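The regression step can be illustrated in one dimension: fit a low-order field model to point measurements, then average the fitted model over the cell. This is a deliberately simplified sketch; the thesis works from the Laplace equation for the scalar potential in three dimensions, and the positions and readings below are synthetic.

```python
import numpy as np

def volume_averaged_field(positions, readings, cell):
    """Least-squares fit of B(z) = c0 + c1*z to sensor readings, then the
    analytic mean of that linear model over the cell extent [lo, hi]."""
    A = np.column_stack([np.ones_like(positions), positions])
    (c0, c1), *_ = np.linalg.lstsq(A, readings, rcond=None)
    lo, hi = cell
    return c0 + c1 * (lo + hi) / 2.0  # mean of a linear field over [lo, hi]

z = np.array([0.0, 0.1, 0.2, 0.3])   # sensor positions (m), synthetic
b = 2.0 + 5.0 * z                    # noiseless synthetic readings (pT)
avg = volume_averaged_field(z, b, (0.0, 0.2))
```

With more sensors than model coefficients, the least-squares fit also averages down uncorrelated sensor noise, which is what makes a sub-picotesla determination of the cell-averaged field plausible.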