
    Social Network Analysis of Ontology Edit Logs

    This paper presents an approach that applies social network analysis to collaborative edit log data, using the Semantic Web Wiki and FAO ontologies as case studies. Users who edit the same ontology or the same pages can be viewed as a social network of people interacting via the ontology. We propose to represent the edit log files as a graph, either of users, connected if they edit the same ontology concepts, or of concepts, connected if they are edited by the same users. We apply social network analysis to such graphs in order to provide insight into the activity of the wiki/ontology editors. Finally, we developed a plugin that provides a convenient GUI for several of the analysis techniques used, so that people interested in monitoring editing activity can perform the analysis and visualization on their own.
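    A minimal sketch of the graph construction described above, using the networkx library; the toy edit log, its field layout, and the centrality measure chosen are illustrative assumptions, not the paper's actual data format or plugin.

```python
# Build the two graphs described in the abstract from a toy edit log:
# users connected if they edited the same concept, and concepts
# connected if they share an editor. (Hypothetical log format.)
import networkx as nx
from networkx.algorithms import bipartite

edit_log = [
    ("alice", "Ontology:Animal"),
    ("bob",   "Ontology:Animal"),
    ("bob",   "Ontology:Plant"),
    ("carol", "Ontology:Plant"),
]

B = nx.Graph()
users = {user for user, _ in edit_log}
concepts = {concept for _, concept in edit_log}
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(concepts, bipartite=1)
B.add_edges_from(edit_log)

# Project onto users: an edge means two users edited the same concept.
user_graph = bipartite.weighted_projected_graph(B, users)
# Project onto concepts: an edge means two concepts share an editor.
concept_graph = bipartite.weighted_projected_graph(B, concepts)

# A simple social-network-analysis measure: the most central editors.
print(sorted(nx.degree_centrality(user_graph).items(), key=lambda kv: -kv[1]))
```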

    Manifestations of Xenophobia in AI Systems

    Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning (ML) fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate the safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the potential interplay between AI and xenophobia in social media and recommendation systems, healthcare, immigration, and employment, as well as biases in large pre-trained models. These analyses inform our recommendations towards an inclusive, xenophilic design of future AI systems.

    The signature and cusp geometry of hyperbolic knots

    We introduce a new real-valued invariant called the natural slope of a hyperbolic knot in the 3-sphere, which is defined in terms of its cusp geometry. We show that twice the knot signature and the natural slope differ by at most a constant times the hyperbolic volume divided by the cube of the injectivity radius. This inequality was discovered using machine learning to detect relationships between various knot invariants. It has applications to Dehn surgery and to 4-ball genus. We also show a refined version of the inequality where the upper bound is a linear function of the volume, and the slope is corrected by terms corresponding to short geodesics that link the knot an odd number of times. Comment: 26 pages, 12 figures
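    The central inequality stated in the abstract can be written out explicitly. The notation below follows the abstract's wording (signature, natural slope, volume, injectivity radius); the constant c is left unspecified, as the abstract does not give it.

```latex
% |2 * signature - natural slope| is bounded by a constant times
% the hyperbolic volume over the cube of the injectivity radius.
\[
  \bigl|\, 2\,\sigma(K) - \operatorname{slope}(K) \,\bigr|
  \;\le\; c \,\frac{\operatorname{vol}(K)}{\operatorname{inj}(K)^{3}}
\]
```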

    Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero

    Artificial Intelligence (AI) systems have made remarkable progress, attaining super-human performance across various domains. This presents us with an opportunity to further human knowledge and improve human expert performance by leveraging the hidden knowledge encoded within these highly performant AI systems. Yet, this knowledge is often hard to extract, and may be hard to understand or learn from. Here, we show that this is possible by proposing a new method for extracting new chess concepts from AlphaZero, an AI system that mastered the game of chess via self-play without human supervision. Our analysis indicates that AlphaZero may encode knowledge that extends beyond existing human knowledge, but that is ultimately not beyond human grasp and can be successfully learned from. In a human study, we show that these concepts are learnable by top human experts, as four top chess grandmasters show improvements in solving the presented concept prototype positions. This marks an important first milestone in advancing the frontier of human knowledge by leveraging AI; a development that could bear profound implications and help us shape how we interact with AI systems across many applications. Comment: 61 pages, 29 figures
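    The abstract does not spell out the extraction method, so the sketch below only illustrates the general idea of concept discovery via a generic linear probe over latent activations: find a direction in a network's representation space that separates positions where a concept applies from positions where it does not. All data and names here are hypothetical; this is not AlphaZero's actual pipeline.

```python
# Generic concept-probing sketch (hypothetical data, not AlphaZero's method):
# fit a linear classifier on latent activations and read off its weight
# vector as a candidate "concept direction".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in latent activations for chess positions (n_positions x n_features).
z_with_concept = rng.normal(0.5, 1.0, size=(200, 64))      # concept present
z_without_concept = rng.normal(-0.5, 1.0, size=(200, 64))  # concept absent

X = np.vstack([z_with_concept, z_without_concept])
y = np.array([1] * 200 + [0] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)
concept_direction = probe.coef_[0]  # candidate concept vector in latent space
print("probe accuracy:", probe.score(X, y))
```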

    Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

    Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from ReLIC [Mitrovic et al., 2021], we include additional inductive biases into self-supervised learning. We propose a new self-supervised representation learning method, ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views to avoid learning spurious correlations and obtain more informative representations. ReLICv2 achieves 77.1% top-1 accuracy on ImageNet under linear evaluation on a ResNet50, thus improving the previous state-of-the-art by an absolute +1.5%; on larger ResNet models, ReLICv2 achieves up to 80.6%, outperforming previous self-supervised approaches by margins of up to +2.3%. Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures. Using ReLICv2, we also learn more robust and transferable representations that generalize better out-of-distribution than previous work, both on image classification and semantic segmentation. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
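    A minimal numpy sketch of the two ingredients named in the abstract: an InfoNCE-style contrastive term plus an explicit invariance penalty that pushes the two augmented views' similarity distributions to agree. The shapes, temperature, KL direction, and weighting alpha are illustrative assumptions, not ReLICv2's actual formulation or hyperparameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_plus_invariance(z1, z2, temperature=0.1, alpha=1.0):
    """z1, z2: L2-normalised embeddings of two views, shape (batch, dim)."""
    logits = z1 @ z2.T / temperature  # pairwise view similarities
    # Contrastive (InfoNCE) term: identify the matching view in the batch,
    # with positives on the diagonal.
    log_probs = np.log(softmax(logits, axis=1))
    contrastive = -np.mean(np.diag(log_probs))
    # Invariance term: the similarity distributions computed from either
    # view should agree (a KL penalty between the two).
    p1 = softmax(logits, axis=1)
    p2 = softmax(logits.T, axis=1)
    kl = np.mean((p1 * (np.log(p1) - np.log(p2))).sum(axis=1))
    return contrastive + alpha * kl

z1 = np.random.randn(8, 16); z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = np.random.randn(8, 16); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(contrastive_plus_invariance(z1, z2))
```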

    Correcting the Hub Occurrence Prediction Bias in Many Dimensions

    Data reduction is a common pre-processing step for k-nearest neighbor classification (kNN). The existing prototype selection methods implement different criteria for selecting relevant points to use in classification, which constitutes a selection bias. This study examines the nature of the instance selection bias in intrinsically high-dimensional data. In high-dimensional feature spaces, hubs are known to emerge as centers of influence in kNN classification. These points dominate most kNN sets and are often detrimental to classification performance. Our experiments reveal that different instance selection strategies bias the predicted behavior of hub points in high-dimensional data in different ways. We propose introducing an intermediate unbiasing step when training the neighbor occurrence models, and we demonstrate promising improvements in various hubness-aware classification methods on a wide selection of high-dimensional synthetic and real-world datasets.
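    A short, self-contained demonstration of the hubness phenomenon the abstract builds on: the k-occurrence count N_k(x) measures how often a point x appears in other points' k-nearest-neighbor lists, and in high-dimensional data its distribution becomes heavily skewed, with a few hubs dominating. The dataset and k below are arbitrary choices for illustration.

```python
# Measure k-occurrence counts N_k on synthetic high-dimensional data;
# the heavy upper tail of the counts is the "hubs" the abstract refers to.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))  # intrinsically high-dimensional points

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)         # first neighbor is the point itself
occurrences = np.bincount(idx[:, 1:].ravel(), minlength=len(X))

print("mean N_k:", occurrences.mean())  # always k by construction
print("max  N_k:", occurrences.max())   # hubs far exceed the mean
```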

    Classification on imbalanced data sets, taking advantage of errors to improve performance

    Classification methods usually exhibit poor performance when applied to imbalanced data sets. To overcome this problem, several algorithms have been proposed in the last decade. Most of them generate synthetic instances in order to balance data sets, regardless of the classification algorithm. These methods work reasonably well in most cases; however, they tend to cause over-fitting. In this paper, we propose a method to address the imbalance problem. Our approach, which is very simple to implement, works in two phases. The first phase detects instances that are difficult for classification methods to predict correctly; these instances are then categorized into "noisy" and "secure", where the former refers to instances most of whose nearest neighbors belong to the opposite class. The second phase generates a number of synthetic instances for each of the instances that are difficult to predict correctly. After applying our method, the AUC of classifiers improves dramatically. We compare our method with other state-of-the-art methods on more than 10 data sets.
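    A minimal sketch of the two-phase idea, under stated assumptions: phase one flags minority instances whose neighborhoods are dominated by the opposite class (the hard, "noisy" cases), and phase two oversamples around them by SMOTE-style interpolation. The threshold, k, and the choice to interpolate toward same-class neighbors are guesses for illustration, not the paper's exact procedure.

```python
# Two-phase oversampling sketch: (1) find hard-to-predict minority points
# via their kNN class composition, (2) add interpolated synthetic points
# around them. Hypothetical parameter choices throughout.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def two_phase_oversample(X, y, minority=1, k=5, n_new=3, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    synthetic = []
    for i in np.where(y == minority)[0]:
        neighbors = idx[i, 1:]                        # skip the point itself
        if (y[neighbors] != minority).mean() >= 0.5:  # hard ("noisy") case
            same = neighbors[y[neighbors] == minority]
            pool = same if len(same) else neighbors   # fall back if isolated
            for j in rng.choice(pool, size=n_new):
                lam = rng.random()                    # interpolation factor
                synthetic.append(X[i] + lam * (X[j] - X[i]))
    X_new = np.vstack([X] + synthetic) if synthetic else X
    y_new = np.concatenate([y, np.full(len(synthetic), minority)])
    return X_new, y_new

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(1, 1, (20, 5))])
y = np.array([0] * 200 + [1] * 20)
X_res, y_res = two_phase_oversample(X, y, minority=1)
print(len(X), "->", len(X_res))
```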

    Diversifying AI: Towards Creative Chess with AlphaZero

    In recent years, Artificial Intelligence (AI) systems have surpassed human intelligence in a variety of computational tasks. However, AI systems, like humans, make mistakes, have blind spots, hallucinate, and struggle to generalize to new situations. This work explores whether AI can benefit from creative decision-making mechanisms when pushed to the limits of its computational rationality. In particular, we investigate whether a team of diverse AI systems can outperform a single AI in challenging tasks by generating more ideas as a group and then selecting the best ones. We study this question in the game of chess, the so-called Drosophila of AI. We build on AlphaZero (AZ) and extend it to represent a league of agents via a latent-conditioned architecture, which we call AZ_db. We train AZ_db to generate a wider range of ideas using behavioral diversity techniques and select the most promising ones with sub-additive planning. Our experiments suggest that AZ_db plays chess in diverse ways, solves more puzzles as a group, and outperforms a more homogeneous team. Notably, AZ_db solves twice as many challenging puzzles as AZ, including the Penrose positions. When playing chess from different openings, we notice that players in AZ_db specialize in different openings, and that selecting a player for each opening using sub-additive planning results in a 50 Elo improvement over AZ. Our findings suggest that diversity bonuses emerge in teams of AI agents, just as they do in teams of humans, and that diversity is a valuable asset in solving computationally hard problems.
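    A toy numerical illustration of the team effect described in the abstract: under sub-additive selection, a puzzle counts as solved if any player in the league solves it, so players with complementary strengths cover far more puzzles than copies of a single player. All numbers are made up; this is not AZ_db's evaluation.

```python
# Diverse league vs. homogeneous team on hypothetical puzzles: each player
# solves 30% of puzzles, but diverse players fail on *different* puzzles.
import numpy as np

rng = np.random.default_rng(0)
n_puzzles, n_players = 1000, 10

# Diverse league: players' successes are independent of one another.
diverse = rng.random((n_players, n_puzzles)) < 0.3
# Homogeneous team: identical copies of a single player.
homogeneous = np.tile(rng.random(n_puzzles) < 0.3, (n_players, 1))

print("diverse league solves:  ", diverse.any(axis=0).mean())      # ~0.97
print("homogeneous team solves:", homogeneous.any(axis=0).mean())  # ~0.30
```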