
    An Analysis of the Consequences of the General Data Protection Regulation on Social Network Research

    This article examines the principles outlined in the General Data Protection Regulation in the context of social network data. We provide both a practical guide to General Data Protection Regulation-compliant social network data processing, covering aspects such as data collection, consent, anonymization, and data analysis, and a broader discussion of the problems that emerge when the general principles on which the regulation is based are instantiated for this research area.
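    One anonymization step often discussed for network data is replacing direct identifiers with consistent pseudonyms before analysis. The sketch below is illustrative only and is not taken from the article; the identifiers, the salt handling, and the `pseudonym` helper are assumptions for the example.

    ```python
    import hashlib
    import secrets

    # Hypothetical edge list keyed by direct identifiers (e-mail addresses)
    edges = [("alice@example.com", "bob@example.com"),
             ("bob@example.com", "carol@example.com")]

    # Random salt; keep it secret, or discard it to make re-identification
    # via the hash mapping infeasible for the data holder as well
    salt = secrets.token_bytes(16)

    def pseudonym(identifier: str) -> str:
        """Salted hash so the same person always maps to the same node label."""
        return hashlib.sha256(salt + identifier.encode()).hexdigest()[:12]

    # Network structure is preserved; direct identifiers are not stored
    pseudo_edges = [(pseudonym(u), pseudonym(v)) for u, v in edges]
    ```

    Note that under the GDPR pseudonymized data generally still counts as personal data, so a step like this supports, but does not by itself establish, compliant processing.
    
    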

    Statistical clustering of temporal networks through a dynamic stochastic block model

    Statistical node clustering in discrete-time dynamic networks is an emerging field that raises many challenges. Here, we explore statistical properties and frequentist inference in a model that combines a stochastic block model (SBM) for its static part with independent Markov chains for the evolution of the nodes' groups through time. We model binary data as well as weighted dynamic random graphs (with discrete or continuous edge values). Our approach, motivated by the importance of controlling for label switching issues across the different time steps, focuses on detecting groups characterized by a stable within-group connectivity behavior. We study identifiability of the model parameters, propose an inference procedure based on a variational expectation maximization algorithm, and introduce a model selection criterion to select the number of groups. We carefully discuss our initialization strategy, which plays an important role in the method, and compare our procedure with existing ones on synthetic datasets. We also illustrate our approach on dynamic contact networks, one of encounters among high school students and two others of animal interactions. An implementation of the method is available as an R package called dynsbm.
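    The generative model described above can be sketched as follows. This is not the dynsbm package (which is in R) but a minimal Python illustration of the binary case, assuming an illustrative transition matrix `Pi` for the per-node Markov chains and a connectivity matrix `B` for the static SBM part.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n, Q, T = 20, 2, 3                  # nodes, groups, time steps (assumed sizes)
    Pi = np.array([[0.8, 0.2],          # Markov transition matrix governing each
                   [0.2, 0.8]])         # node's group membership over time
    B = np.array([[0.7, 0.1],           # within/between-group connection
                  [0.1, 0.6]])          # probabilities (static SBM part)

    # Initial memberships, then independent Markov-chain evolution per node
    Z = np.empty((T, n), dtype=int)
    Z[0] = rng.integers(0, Q, size=n)
    for t in range(1, T):
        Z[t] = [rng.choice(Q, p=Pi[z]) for z in Z[t - 1]]

    # Binary adjacency matrix drawn independently at each time step
    A = np.empty((T, n, n), dtype=int)
    for t in range(T):
        P = B[Z[t][:, None], Z[t][None, :]]   # edge probability for each pair
        A[t] = rng.random((n, n)) < P
        np.fill_diagonal(A[t], 0)             # no self-loops
    ```

    Inference in the paper runs in the opposite direction: given the observed `A`, the variational EM procedure estimates `Pi`, `B`, and the latent memberships `Z`.
    
    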

    Simplifying resource discovery and access in academic libraries: implementing and evaluating Summon at Huddersfield and Northumbria Universities

    Facilitating information discovery and maximising value for money from library materials is a key driver for academic libraries, which spend substantial sums of money on journal, database and book purchasing. Users are confused by the complexity of our collections and the multiple platforms needed to access them, and are reluctant to spend time learning about individual resources and how to use them, comparing this unfavourably to popular and intuitive search engines like Google. As a consequence, the library may be seen as too complicated and time consuming, and many of our most valuable resources remain undiscovered and underused. Federated search tools were the first commercial products to address this problem. They work by using a single search box to interrogate multiple databases (including library catalogues) and journal platforms. While going some way to address the problem, many users complained that they were still relatively slow, clunky and complicated to use compared to Google or Google Scholar. The emergence of web-scale discovery services in 2009 promised to deal with some of these problems. By harvesting and indexing metadata direct from publishers and local library collections into a single index, they facilitate resource discovery and access to multiple library collections (whether in print or electronic form) via a single search box. Users no longer have to negotiate a number of separate platforms to find different types of information, and because the data is held in a single unified index, searching is fast and easy. In 2009 both Huddersfield and Northumbria Universities purchased Serials Solutions' Summon. This case study report describes the selection, implementation and testing of Summon at both universities, drawing out common themes as well as differences; there are suggestions for those who intend to implement Summon in the future and some suggestions for future development.

    Quality Properties of Execution Tracing, an Empirical Study

    The authors are grateful to all the professionals who participated in the focus groups; moreover, they express special thanks to the management of the companies involved for making the organisation of the focus groups possible. Data are made available in the appendix, including the results of the data coding process. The quality of execution tracing greatly impacts the time needed to locate errors in software components; moreover, execution tracing is, in the majority of cases, the most suitable tool for postmortem analysis of failures in the field. Nevertheless, software product quality models do not adequately consider execution tracing quality at present, nor do they define the quality properties of this important entity in an acceptable manner. Defining these quality properties would be the first step towards creating a quality model for execution tracing. The current research fills this gap by identifying and defining the variables, i.e., the quality properties, on the basis of which the quality of execution tracing can be judged. The present study analyses the experiences of software professionals in focus groups at multinational companies, and also scrutinises the literature to elicit the mentioned quality properties. Moreover, the present study contributes to knowledge with its combination of methods for computing the saturation point that determines the number of necessary focus groups. Furthermore, to pay special attention to validity, in addition to the indicators of qualitative research (credibility, transferability, dependability, and confirmability), the authors also considered content, construct, internal and external validity.
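    For readers unfamiliar with the practice the abstract studies, execution tracing simply means recording what a program does as it runs so that failures can be diagnosed afterwards. The decorator below is a minimal, generic sketch (not from the paper); the `traced` helper and the sample `divide` function are illustrative assumptions.

    ```python
    import functools
    import logging

    logging.basicConfig(level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("trace")

    def traced(fn):
        """Log entry, exit, and failures of fn -- a crude form of execution tracing."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            log.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            try:
                result = fn(*args, **kwargs)
            except Exception:
                log.exception("failure in %s", fn.__name__)  # postmortem clue
                raise
            log.debug("exit %s -> %r", fn.__name__, result)
            return result
        return wrapper

    @traced
    def divide(a, b):
        return a / b

    divide(6, 2)
    ```

    Quality properties of the kind the paper elicits (e.g. how accurate, legible, or complete such log output is) determine how useful a trace like this turns out to be when a field failure must be reconstructed.
    
    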