96 research outputs found

    Synchronization in random networks with given expected degree sequences

    Synchronization in random networks with given expected degree sequences is studied. We investigate in detail the synchronization of networks whose topology is described by classical random graphs, power-law random graphs, and hybrid graphs as N goes to infinity. In particular, we show that random graphs almost surely synchronize. We also show that adding a small number of global edges to a local graph makes the corresponding hybrid graph synchronize.
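Synchronizability claims of this kind are usually phrased through the graph Laplacian spectrum: the eigenratio λ_N/λ_2 controls how easily a network synchronizes, and for dense random graphs it stays bounded. A minimal sketch (not the paper's construction; the G(n, p) model and the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def er_laplacian(n, p):
    """Laplacian of an Erdos-Renyi random graph G(n, p)."""
    a = np.triu((rng.random((n, n)) < p).astype(float), 1)
    a = a + a.T                           # symmetric adjacency matrix
    return np.diag(a.sum(axis=1)) - a     # L = D - A

# A common synchronizability measure is the eigenratio lambda_N / lambda_2
# of the Laplacian: the closer to 1, the easier the network synchronizes.
L = er_laplacian(200, 0.1)
lam = np.linalg.eigvalsh(L)               # eigenvalues, sorted ascending
eigratio = lam[-1] / lam[1]
print(f"lambda_2 = {lam[1]:.2f}, lambda_N = {lam[-1]:.2f}, ratio = {eigratio:.2f}")
```

A positive λ_2 certifies the sampled graph is connected, which G(200, 0.1) is with overwhelming probability.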

    Synchronization in Networks of Hindmarsh-Rose Neurons

    Synchronization is deemed to play an important role in information processing in many neuronal systems. In this work, using a well-known technique due to Pecora and Carroll, we investigate the existence of a synchronous state and the bifurcation diagram of a network of synaptically coupled neurons described by the Hindmarsh-Rose model. Analysis of the bifurcation diagram reveals the different dynamics of the possible synchronous states. Furthermore, the influence of the topology on the synchronization properties of the network is shown through an example.
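For concreteness, a single Hindmarsh-Rose neuron can be integrated with a forward-Euler step. The parameter values below are the commonly used ones, not necessarily those of the paper, and the synaptic coupling between neurons is omitted:

```python
# Hindmarsh-Rose model of a single bursting neuron (coupling not shown).
def hr_step(x, y, z, dt, I=3.25, r=0.006, s=4.0, x_rest=-1.6):
    dx = y - x**3 + 3.0 * x**2 - z + I   # membrane potential
    dy = 1.0 - 5.0 * x**2 - y            # fast recovery variable
    dz = r * (s * (x - x_rest) - z)      # slow adaptation current
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = -1.6, 0.0, 0.0
trace = []
for _ in range(200_000):                 # 2000 time units at dt = 0.01
    x, y, z = hr_step(x, y, z, 0.01)
    trace.append(x)

# With I = 3.25 the neuron bursts: x repeatedly spikes above zero.
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0 <= b)
print("upward zero crossings:", spikes)
```

In a network study, each neuron would get a synaptic input term and the Pecora-Carroll analysis would separate the synchronous dynamics from the transverse stability problem.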

    The evolution of power and standard Wikidata editors: comparing editing behavior over time to predict lifespan and volume of edits

    Knowledge bases are becoming a key asset leveraged for various types of applications on the Web, from search engines presenting ‘entity cards’ as the result of a query, to the use of structured data from knowledge bases to empower virtual personal assistants. Wikidata is an open general-interest knowledge base that is collaboratively developed and maintained by a community of thousands of volunteers. One of the major challenges faced in such a crowdsourcing project is to attain a high level of editor engagement. In order to intervene and encourage editors to be more committed to editing Wikidata, it is important to be able to predict at an early stage whether or not an editor will become engaged. In this paper, we investigate this problem and study the evolution that editors with different levels of engagement exhibit in their editing behaviour over time. We measure an editor’s engagement in terms of (i) the volume of edits provided by the editor and (ii) their lifespan (i.e. the length of time for which an editor is present at Wikidata). The large-scale longitudinal data analysis that we perform covers Wikidata edits over almost four years. We monitor evolution on a session-by-session and monthly basis, observing how the participation, volume, and diversity of edits by Wikidata editors change. Using the findings of our exploratory analysis, we define and implement prediction models that use multiple evolution indicators.
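The session-level indicators described above boil down to splitting each editor's timestamped edits at inactivity gaps and deriving volume and lifespan features. A minimal sketch with hypothetical data (the one-hour gap and the feature names are illustrative assumptions; the study's actual sessionisation may differ):

```python
from datetime import datetime, timedelta

# Hypothetical edit log: (editor, timestamp of an edit).
edits = [
    ("alice", "2023-01-01 10:00"), ("alice", "2023-01-01 10:20"),
    ("alice", "2023-01-03 11:00"),
    ("bob",   "2023-01-01 10:00"),
]

GAP = timedelta(hours=1)      # a new session starts after >1 h of inactivity

def sessions(timestamps):
    """Split one editor's sorted edit times into inactivity-bounded sessions."""
    ts = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M") for t in timestamps)
    out, current = [], [ts[0]]
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev > GAP:
            out.append(current)
            current = []
        current.append(cur)
    out.append(current)
    return out

by_editor = {}
for editor, t in edits:
    by_editor.setdefault(editor, []).append(t)

# Engagement indicators per editor: edit volume, session count, lifespan.
features = {}
for editor, ts in by_editor.items():
    stamps = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M") for t in ts)
    features[editor] = {
        "edits": len(ts),
        "sessions": len(sessions(ts)),
        "lifespan_days": (stamps[-1] - stamps[0]).days,
    }

print(features)
```

Feature dictionaries of this shape are what an early-stage engagement classifier would consume.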

    OpenNym: privacy preserving recommending via pseudonymous group authentication

    A user accessing an online recommender system typically has two choices: either agree to be uniquely identified and in return receive a personalized and rich experience, or try to use the service anonymously but receive a degraded non-personalized service. In this paper, we offer a third option to this “all or nothing” paradigm, namely using a web service with a public group identity, which we refer to as an OpenNym identity, providing users with a degree of anonymity while still allowing useful personalization of the web service. Our approach can be implemented as a browser shim that is backward compatible with existing services, and as an example we demonstrate operation with the Movielens online service. We exploit the fact that users can often be clustered into groups having similar preferences, so that increased privacy need not come at the cost of degraded service. Indeed, use of the OpenNym approach with Movielens improves personalization performance.
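The clustering idea at the heart of OpenNym can be sketched with a toy k-means over user rating vectors; the data and the choice of plain k-means are illustrative assumptions, not the paper's actual pipeline:

```python
import math, random

# Hypothetical ratings of four users over four items.
ratings = {
    "u1": [5, 4, 1, 0], "u2": [4, 5, 0, 1],   # prefer items 0-1
    "u3": [0, 1, 5, 4], "u4": [1, 0, 4, 5],   # prefer items 2-3
}

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute means."""
    rng = random.Random(seed)
    cents = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, cents[i]))].append(p)
        cents = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else cents[i]
            for i, cl in enumerate(clusters)
        ]
    return cents

cents = kmeans(list(ratings.values()), k=2)

def group_of(user):
    """Pseudonymous group identity: index of the nearest taste cluster."""
    return min(range(len(cents)), key=lambda i: math.dist(ratings[user], cents[i]))

# Users with similar preferences end up sharing one group identity, and the
# recommender only ever sees the group's profile, not the individual's.
print({u: group_of(u) for u in ratings})
```

The recommender is then queried under the shared group profile, which is what preserves useful personalization while hiding the individual.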

    AI-assisted peer review

    The scientific literature peer review workflow is under strain because of the constant growth of submission volume. One response to this is to make initial screening of submissions less time-intensive. Reducing screening and review time would save millions of working hours and potentially boost academic productivity. Many platforms have already started to use automated screening tools to prevent plagiarism and failure to respect format requirements. Some tools even attempt to flag the quality of a study or summarise its content, to reduce reviewers’ load. The recent advances in artificial intelligence (AI) create the potential for (semi) automated peer review systems, where potentially low-quality or controversial studies could be flagged, and reviewer-document matching could be performed in an automated manner. However, such approaches raise ethical concerns, particularly around bias and the extent to which AI systems may replicate it. Our main goal in this study is to discuss the potential, pitfalls, and uncertainties of the use of AI to approximate or assist human decisions in the quality assurance and peer-review process associated with research outputs. We design an AI tool and train it with 3300 papers from three conferences, together with their review evaluations. We then test the AI’s ability to predict the review score of a new, unobserved manuscript using only its textual content. We show that such techniques can reveal correlations between the decision process and other quality proxy measures, uncovering potential biases of the review process. Finally, we discuss the opportunities, but also the potential unintended consequences, of these techniques in terms of algorithmic bias and ethical concerns.
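The score-prediction step can be illustrated with a tiny nearest-neighbour model over bag-of-words vectors. The four training pairs and the 1-NN choice are stand-ins for the 3300-paper corpus and whatever model the study actually used:

```python
import math
from collections import Counter

# Hypothetical (text, review score) training pairs.
train = [
    ("rigorous proofs and thorough experiments", 8),
    ("novel method with strong empirical results", 7),
    ("unclear writing and weak evaluation", 3),
    ("limited novelty and missing baselines", 4),
]

def vec(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den

def predict(text):
    """1-nearest-neighbour: score of the most similar training text."""
    v = vec(text)
    return max(train, key=lambda ex: cosine(v, vec(ex[0])))[1]

print(predict("thorough experiments and rigorous analysis"))   # -> 8
```

A model like this also makes the bias risk concrete: it can only echo whatever correlations between wording and score exist in the historical reviews it was trained on.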

    All those wasted hours: On task abandonment in crowdsourcing

    Crowdsourcing has become a standard methodology to collect manually annotated data such as relevance judgments at scale. On crowdsourcing platforms like Amazon MTurk or FigureEight, crowd workers select tasks to work on based on different dimensions such as task reward and requester reputation. Requesters then receive the judgments of workers who self-selected into the tasks and completed them successfully. Many crowd workers, however, preview or begin tasks and reach varying stages of completion without finally submitting their work. Such behavior results in unrewarded effort which remains invisible to requesters. In this paper, we conduct the first investigation into the phenomenon of task abandonment: the act of workers previewing or beginning a task and deciding not to complete it. We follow a threefold methodology which includes 1) investigating the prevalence and causes of task abandonment by means of a survey over different crowdsourcing platforms, 2) data-driven analyses of logs collected during a large-scale relevance judgment experiment, and 3) controlled experiments measuring the effect of different dimensions on abandonment. Our results show that task abandonment is a widespread phenomenon. Apart from accounting for a considerable amount of wasted human effort, this has important implications for the hourly wages of workers, as they are not rewarded for tasks that they do not complete. We also show how task abandonment may have strong implications for the use of collected data (for example, in the evaluation of IR systems).
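The log-based part of the methodology reduces to a funnel over per-worker events. A minimal sketch with hypothetical event names and data, just to show the abandonment-rate and unrewarded-effort computation:

```python
# Hypothetical task-event log: (worker, event, seconds since task preview).
log = [
    ("w1", "preview", 0), ("w1", "start", 5), ("w1", "submit", 120),
    ("w2", "preview", 0), ("w2", "start", 4),    # started, never submitted
    ("w3", "preview", 0),                        # left at the preview stage
]

workers   = {w for w, _, _ in log}
started   = {w for w, e, _ in log if e == "start"}
submitted = {w for w, e, _ in log if e == "submit"}

abandoned = workers - submitted
rate = len(abandoned) / len(workers)

# Unrewarded effort: last observed activity time of workers who started
# a task but never submitted it.
last_seen = {}
for w, _, t in log:
    last_seen[w] = max(last_seen.get(w, 0), t)
wasted = sum(last_seen[w] for w in started - submitted)

print(f"abandonment rate: {rate:.0%}, unrewarded seconds: {wasted}")
```

Distinguishing preview-stage from mid-task abandonment, as above, matters because only the latter represents unpaid working time.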

    Integrating FATE/critical data studies into data science curricula : where are we going and how do we get there?

    There have been multiple calls for integrating topics related to fairness, accountability, transparency, ethics (FATE) and social justice into Data Science curricula, but little exploration of how this might work in practice. This paper presents the findings of a collaborative auto-ethnography (CAE) engaged in by an MSc Data Science teaching team based at the University of Sheffield (UK) Information School, where FATE/Critical Data Studies (CDS) topics have been a core part of the curriculum since 2015/16. In this paper, we adopt the CAE approach to reflect on our experiences of working at the intersection of disciplines, and on our progress and future plans for integrating FATE/CDS into the curriculum. We identify a series of challenges for deeper FATE/CDS integration related to our own competencies and the wider socio-material context of Higher Education in the UK. We conclude with recommendations for ourselves and the wider FATE/CDS-oriented Data Science community.